7 Algorithms and Data Structures



Introduction to Algorithms and Data Structures

Algorithms and data structures form the backbone of computer science, enabling efficient data processing, storage, and retrieval. They are fundamental concepts that are essential for solving computational problems effectively.

Algorithms

An algorithm is a step-by-step procedure or formula for solving a problem. It consists of a sequence of instructions that take inputs and produce an output. Key properties of an algorithm include:

  • Correctness: An algorithm should solve the problem it was designed to solve.
  • Efficiency: It should make optimal use of resources, such as time and space.
  • Finiteness: It should have a clear stopping point.

Types of Algorithms

  1. Sorting Algorithms

    • Bubble Sort: A simple comparison-based sorting algorithm that repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order.
    • Merge Sort: A divide-and-conquer algorithm that divides the list into halves, recursively sorts them, and then merges the sorted halves (see the sketch after this list).
    • Quick Sort: Another divide-and-conquer algorithm that selects a 'pivot' element and partitions the array around the pivot, recursively sorting the partitions.
  2. Searching Algorithms

    • Linear Search: A straightforward method that checks each element in the list sequentially until the desired element is found or the list ends.
    • Binary Search: A more efficient algorithm for sorted arrays that repeatedly divides the search interval in half until the target value is found or the interval is empty.
  3. Graph Algorithms

    • Dijkstra's Algorithm: Finds the shortest path between nodes in a graph with non-negative edge weights.
    • Depth-First Search (DFS): Explores as far as possible along each branch before backtracking, useful for pathfinding and topological sorting.
    • Breadth-First Search (BFS): Explores all neighbor nodes at the present depth before moving on to nodes at the next depth level, used for shortest path in unweighted graphs.
  4. Dynamic Programming Algorithms

    • Fibonacci Sequence: Computes the nth Fibonacci number using previously computed values to avoid redundant calculations.
    • Knapsack Problem: Determines the most valuable combination of items to include in a knapsack without exceeding its capacity.
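
To make the divide-and-conquer idea behind Merge Sort concrete, here is a minimal Python sketch (the function name and the sample list are illustrative, not from the original article):

def merge_sort(items):
    # A list of zero or one elements is already sorted.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # recursively sort the left half
    right = merge_sort(items[mid:])   # recursively sort the right half
    # Merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]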

Data Structures

Data structures are ways of organizing and storing data so that they can be accessed and modified efficiently. They define the relationship between the data and the operations that can be performed on them.

Types of Data Structures

  1. Arrays

    • A collection of elements identified by index or key, stored in contiguous memory locations.
    • Efficient for accessing elements by index but expensive for insertions and deletions.
  2. Linked Lists

    • A collection of nodes, each containing data and a reference (or link) to the next node in the sequence.
    • Efficient for insertions and deletions but less efficient for accessing elements by index.
  3. Stacks

    • A linear data structure that follows the Last-In-First-Out (LIFO) principle.
    • Operations: push (insert an element), pop (remove the top element), peek (retrieve the top element without removing it).
  4. Queues

    • A linear data structure that follows the First-In-First-Out (FIFO) principle.
    • Operations: enqueue (insert an element at the end), dequeue (remove the element from the front).
  5. Trees

    • A hierarchical data structure consisting of nodes, with a single node as the root and all other nodes connected by edges.
    • Binary Trees: Each node has at most two children.
    • Binary Search Trees (BST): A binary tree where the left subtree contains only nodes with values less than the parent node, and the right subtree contains only nodes with values greater than the parent node (see the sketch after this list).
  6. Heaps

    • A specialized tree-based data structure that satisfies the heap property.
    • Max Heap: The key at the root is the maximum among all keys present in the binary heap.
    • Min Heap: The key at the root is the minimum among all keys present in the binary heap.
  7. Graphs

    • A collection of nodes (vertices) and edges connecting pairs of nodes.
    • Types of graphs: directed, undirected, weighted, and unweighted.
  8. Hash Tables

    • A data structure that implements an associative array abstract data type, a structure that can map keys to values.
    • Uses a hash function to compute an index into an array of buckets or slots, from which the desired value can be found.
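
To make the binary search tree idea concrete, here is a minimal insert-and-lookup sketch in Python (the class and function names are illustrative, not from the original article):

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None    # subtree holding keys smaller than self.key
        self.right = None   # subtree holding keys greater than self.key

def insert(root, key):
    # Insert key into the BST rooted at root and return the (possibly new) root.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root  # duplicates are ignored in this sketch

def contains(root, key):
    # Walk down the tree, going left or right depending on the key comparison.
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(contains(root, 6), contains(root, 7))  # True False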

Choosing the Right Algorithm and Data Structure

The choice of algorithm and data structure depends on the specific problem requirements:

  • Time Complexity: Measure of the time an algorithm takes to complete as a function of the length of the input.
  • Space Complexity: Measure of the amount of memory an algorithm uses as a function of the length of the input.
  • Data Type: The nature of the data being processed (e.g., integers, strings, objects).
  • Operations: The types of operations that need to be performed (e.g., insertions, deletions, searches).

Conclusion

Understanding algorithms and data structures is crucial for efficient problem-solving and optimizing performance in software development. By mastering these fundamental concepts, you can write code that is not only correct but also efficient and scalable.

Source: https://intitute.blogspot.com/2018/10/7-algorithms-and-data-structures.html


Sort Algorithms, Search Algorithms, Hashing, Dynamic Programming, Exponentiation by Squaring, String Matching and Parsing, Primality Testing Algorithms

1. Sort Algorithms

Sorting is one of the most heavily studied concepts in computer science. The idea is to arrange the items of a list in a specific order. Though every major programming language has built-in sorting libraries, it comes in handy if you know how they work. Depending upon the requirement, you may want to use any of these:
  • Merge Sort
  • Quick Sort
  • Bucket Sort
  • Heap Sort
  • Counting Sort
More importantly, one should know when and where to use them. Some examples where you can find direct application of sorting techniques include:
  • Sorting by price, popularity, etc. on e-commerce websites (see the sketch below)
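
For the e-commerce case above, a hand-written algorithm is rarely needed; the built-in sort with a key function is usually enough. A minimal Python sketch (the product data is made up for illustration):

products = [
    {"name": "mouse", "price": 120, "popularity": 4.1},
    {"name": "keyboard", "price": 250, "popularity": 4.7},
    {"name": "cable", "price": 30, "popularity": 3.9},
]

by_price = sorted(products, key=lambda p: p["price"])                          # cheapest first
by_popularity = sorted(products, key=lambda p: p["popularity"], reverse=True)  # most popular first

print([p["name"] for p in by_price])        # ['cable', 'mouse', 'keyboard']
print([p["name"] for p in by_popularity])   # ['keyboard', 'mouse', 'cable']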

2. Search Algorithms

Binary Search (in linear data structures)
Binary search is used to perform a very efficient search on a sorted dataset. The time complexity is O(log n). The idea is to repeatedly halve the portion of the list that could contain the item, until we narrow it down to one possible item. Some applications are:
  • When you search for the name of a song in a sorted list of songs, the player performs a binary search combined with string matching to return the results quickly.
  • Used for debugging in Git through git bisect
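
A minimal iterative binary search over a sorted list might look like this (the function name is illustrative):

def binary_search(sorted_items, target):
    # Return the index of target in sorted_items, or -1 if it is absent.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1    # the target can only be in the right half
        else:
            high = mid - 1   # the target can only be in the left half
    return -1

songs = ["Africa", "Hallelujah", "Imagine", "Yesterday"]
print(binary_search(songs, "Imagine"))  # 2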
Depth/Breadth First Search (in Graph data structures)

DFS and BFS are tree/graph traversal and search algorithms. We won't go deep into how DFS and BFS work here; the key difference is that DFS explores one branch as far as possible before backtracking, while BFS explores every node at the current depth before moving to the next level.
Applications:
  • Used by search engines for web-crawling
  • Used in artificial intelligence to build bots, for instance a chess bot
  • Finding the shortest path between two cities on a map, and many other such applications
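
As an illustration, the BFS sketch below counts the fewest road hops between two cities in a small unweighted graph (the city graph is made up for illustration):

from collections import deque

def bfs_distance(graph, start, goal):
    # Return the number of edges on a shortest path from start to goal, or -1 if unreachable.
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        city, dist = queue.popleft()
        if city == goal:
            return dist
        for neighbour in graph[city]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, dist + 1))
    return -1

roads = {
    "Jakarta": ["Bandung", "Semarang"],
    "Bandung": ["Jakarta", "Yogyakarta"],
    "Semarang": ["Jakarta", "Surabaya"],
    "Yogyakarta": ["Bandung", "Surabaya"],
    "Surabaya": ["Semarang", "Yogyakarta"],
}
print(bfs_distance(roads, "Jakarta", "Surabaya"))  # 2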

3. Hashing

Hash lookup is currently the most widely used technique for finding data by key or ID. Previously we relied on sorting plus binary search to find an item's index; with hashing, the index is computed directly from the key.
The data structure is referred to as a hash map, hash table, or dictionary; it maps keys to values efficiently, so we can perform value lookups by key. The idea is to use an appropriate hash function that maps each key to a bucket from which its value can be retrieved. Choosing a good hash function depends upon the scenario.
Applications:
  • In routers, to store IP address -> path pairs for routing mechanisms
  • To check whether a value already exists in a list; a linear search would be expensive. We can also use the Set data structure (typically hash-based itself) for this operation.
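
In Python these ideas map directly onto the built-in dict and set, both backed by hash tables (the routing data below is made up for illustration):

# Key -> value lookup: an IP address mapped to an outgoing path.
routing_table = {
    "192.168.1.10": "eth0",
    "10.0.0.5": "eth1",
}
print(routing_table.get("10.0.0.5"))  # 'eth1', average O(1) lookup

# Membership test: a set avoids a linear scan over a list.
seen_ids = {101, 205, 307}
print(205 in seen_ids)  # True, average O(1) check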

4. Dynamic Programming

Dynamic programming (DP) is a method for solving a complex problem by breaking it down into simpler subproblems. We solve the subproblems, remember their results, and use them to work our way up to the complex problem quickly.

*writes down “1+1+1+1+1+1+1+1=” on a sheet of paper* What’s that equal to?

*counting* Eight!

*writes down another “1+” on the left* What about that?

*quickly* Nine!

How’d you know it was nine so fast?

You just added one more

So you didn't need to recount because you remembered there were eight! Dynamic Programming is just a fancy way to say 'remembering stuff to save time later'.
Applications:
  • There are many DP algorithms and applications, but one that might surprise you is the Duckworth-Lewis method used in cricket.
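
The "remembering stuff to save time later" idea is exactly memoization. A minimal sketch using the Fibonacci numbers mentioned earlier in this article:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Return the nth Fibonacci number, caching each subproblem's result.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, computed in linear rather than exponential time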

5. Exponentiation by squaring

Say you want to calculate 2^32. Normally we'd iterate 32 times and find the result. What if I told you it can be done in 5 iterations?
Exponentiation by squaring, or binary exponentiation, is a general method for fast computation of large positive integer powers of a number in O(log n) multiplications. Not only this, the method is also used for computing powers of polynomials and square matrices.
Application:
  • Calculation of large powers of a number is mostly required in RSA encryption. RSA also uses modular arithmetic along with binary exponentiation.
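
A minimal sketch of exponentiation by squaring, with an optional modulus as used in RSA-style modular arithmetic (the function name is illustrative; Python's built-in pow does the same job):

def power(base, exponent, modulus=None):
    # Compute base**exponent (optionally reduced mod modulus) in O(log exponent) multiplications.
    result = 1
    while exponent > 0:
        if exponent & 1:  # the lowest bit of the exponent is set
            result = result * base if modulus is None else (result * base) % modulus
        base = base * base if modulus is None else (base * base) % modulus
        exponent >>= 1    # move on to the next bit of the exponent
    return result

print(power(2, 32))       # 4294967296
print(power(7, 128, 13))  # same result as Python's pow(7, 128, 13)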

6. String Matching and Parsing

Pattern matching/searching is one of the most important problems in computer science. There has been a lot of research on the topic, but we'll list only two basic necessities for any programmer.
KMP Algorithm (String Matching)
The Knuth-Morris-Pratt algorithm is used in cases where we have to match a short pattern within a long string. For instance, when we Ctrl+F a keyword in a document, we perform pattern matching over the whole document.
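A compact sketch of KMP, split into building the prefix (failure) table and the scan itself (the function names are mine, not from the article):

def build_prefix_table(pattern):
    # table[i] = length of the longest proper prefix of pattern[:i+1] that is also its suffix.
    table = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = table[k - 1]  # fall back to the next shorter border
        if pattern[i] == pattern[k]:
            k += 1
        table[i] = k
    return table

def kmp_search(text, pattern):
    # Return the index of the first occurrence of pattern in text, or -1 if there is none.
    if not pattern:
        return 0
    table = build_prefix_table(pattern)
    k = 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = table[k - 1]  # reuse the characters already matched
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1  # the match ends at position i
    return -1

print(kmp_search("ababcabcabababd", "ababd"))  # 10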
Regular Expression (String Parsing)
Many times we have to validate a string by parsing it against a predefined pattern. Regular expressions are heavily used in web development for URL parsing and matching.
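For instance, a rough URL check with Python's re module might look like this (the pattern is a deliberately simplified illustration, not a complete URL grammar):

import re

# Very simplified: scheme, host, optional port, optional path.
URL_PATTERN = re.compile(r"^(https?)://([\w.-]+)(?::(\d+))?(/\S*)?$")

match = URL_PATTERN.match("https://example.com:8080/docs/index.html")
if match:
    scheme, host, port, path = match.groups()
    print(scheme, host, port, path)  # https example.com 8080 /docs/index.html
else:
    print("not a URL this simplified pattern accepts")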

7. Primality Testing Algorithms

There are deterministic and probabilistic ways of determining whether a given number is prime or not. We'll see both.
Sieve of Eratosthenes (deterministic)
If we have a fixed limit on the range of numbers, say we need all primes in the range 100 to 1000, then the Sieve is the way to go. The length of the range is a crucial factor, because we have to allocate memory in proportion to the range.
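A minimal Sieve of Eratosthenes sketch for the 100-to-1000 example above:

def primes_up_to(limit):
    # Return all primes <= limit using the Sieve of Eratosthenes.
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False  # cross off every multiple of p
    return [n for n, prime in enumerate(is_prime) if prime]

primes_100_to_1000 = [p for p in primes_up_to(1000) if p >= 100]
print(len(primes_100_to_1000))  # 143 primes between 100 and 1000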
For any number n, incrementally testing up to sqrt(n) (deterministic)
In case you want to check a few numbers that are sparsely spread over a long range (say 1 to 10^12), the Sieve won't be able to allocate enough memory. You can check each number n by trying divisors only up to sqrt(n) and performing a divisibility check on n.
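A minimal trial-division check up to sqrt(n), suitable for a handful of isolated numbers:

def is_prime(n):
    # Deterministic primality check by trial division up to sqrt(n).
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    divisor = 3
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False
        divisor += 2  # only odd divisors need to be tested
    return True

print(is_prime(1_000_000_007))  # True: 10**9 + 7 is a well-known prime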
Fermat primality test and Miller–Rabin primality test (both are nondeterministic)
Both of these are compositeness tests. If a number is proved to be composite, then it certainly isn't a prime number. Miller-Rabin is more sophisticated than Fermat's test. In fact, Miller-Rabin also has a deterministic variant, but then it's a trade-off between time complexity and the accuracy of the algorithm.
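A minimal Fermat test sketch (probabilistic: a True result means "probably prime", not a proof):

import random

def fermat_probably_prime(n, rounds=10):
    # Fermat test: can wrongly accept some composites (e.g. Carmichael numbers).
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:  # a witness: n is definitely composite
            return False
    return True  # probably prime

print(fermat_probably_prime(101), fermat_probably_prime(100))  # True False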
Application:
  • The single most important use of prime numbers is in cryptography. More precisely, they are used in encryption and decryption in the RSA algorithm, one of the very first implementations of public-key cryptosystems.
  • Another use is in the hash functions used in hash tables.
