
I've been looking over sorting algorithms for linked lists, and here's what I came up with.

As we know, arrays beat linked lists for element access. There is also the benefit of elements being stored in consecutive memory locations: thanks to spatial locality in the cache, arrays give better performance.

Thus, to sort a linked list:

  1. Extract all elements from the linked list into an array: $O(n)$

  2. Sort the array using quicksort: $\Theta(n \log n)$

  3. Store the elements in linked list: $O(n)$
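The three steps can be sketched in Python as follows (a minimal sketch; the `Node` class is an assumption for illustration, and Python's built-in sort is Timsort rather than quicksort, though both are $O(n \log n)$ comparison sorts on average):

```python
class Node:
    """Minimal singly linked list node (assumed for illustration)."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def sort_via_array(head):
    # Step 1: copy the values into an array -- O(n).
    values = []
    node = head
    while node is not None:
        values.append(node.value)
        node = node.next
    # Step 2: sort the array -- O(n log n).
    values.sort()
    # Step 3: write the sorted values back into the existing nodes -- O(n).
    node = head
    for v in values:
        node.value = v
        node = node.next
    return head
```

Note that this version reuses the existing nodes and only copies values, so it avoids allocating a second set of nodes; the array of $n$ values is still extra memory.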

Is this approach better than other approaches for sorting linked lists, such as using merge sort directly, as described on GeeksforGeeks?

  • Your approach uses a large block of consecutive memory, and it doubles peak memory usage. Commented Jan 29, 2019 at 4:56
  • Can you explain how?
    – manjy
    Commented Jan 29, 2019 at 5:35
  • It uses an array of length $n$, which is a large block of memory. After allocating space for the array, you are using $2n$ elements' worth of memory. Commented Jan 29, 2019 at 5:48
  • But it will be faster when the main concern is speed, right?
    – manjy
    Commented Jan 29, 2019 at 5:51
  • Both quicksort and mergesort are $\Theta(n \log n)$, at least on average, so it's hard to say unless you actually try it out. Commented Jan 29, 2019 at 6:10

1 Answer


The downside of your algorithm is that you need to allocate enough space to hold all the elements of the list. This requires knowing the length of the list (which means traversing it once just to count), and you may not have enough memory for a contiguous block of that size.


A variation of classic merge sort works a lot better for linked lists:

Keep an array of $\lceil \log_2 n \rceil$ initially empty linked lists (or just fix the size at 64, since you are never going to sort anywhere near $2^{64}$ elements).

For each element in the original list, extract it into a single-element list and do the following:

Look at the first slot of the array. If a list is already stored there, merge your list into it, take the merged result, clear that slot, and repeat with the next slot of the array.

Once you reach an empty slot, store your list there and move on to the next element of the original list.

Once you've processed the last element of the list, merge the lists remaining in the array, starting from the lowest index.
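The procedure above can be sketched in Python like this (a sketch for illustration; the `Node` class and helper names are assumptions, and the fixed 64-slot array follows the suggestion above):

```python
class Node:
    """Minimal singly linked list node (assumed for illustration)."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def merge(a, b):
    """Merge two already-sorted lists; takes from `a` on ties."""
    dummy = Node(None)
    tail = dummy
    while a is not None and b is not None:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a if a is not None else b
    return dummy.next

def bottom_up_list_sort(head):
    # Slot i holds either None or a sorted sublist of 2**i nodes.
    pending = [None] * 64
    while head is not None:
        node, head = head, head.next
        node.next = None          # detach a single-element sorted list
        i = 0
        # Like binary addition with carries: merge equal-sized lists
        # until an empty slot is found, then park the result there.
        while pending[i] is not None:
            node = merge(pending[i], node)
            pending[i] = None
            i += 1
        pending[i] = node
    # Final pass: fold together whatever is left, lowest index first.
    result = None
    for sublist in pending:
        if sublist is not None:
            result = sublist if result is None else merge(sublist, result)
    return result
```

The slot array behaves like a binary counter: inserting the $k$-th element triggers exactly the carries of incrementing $k$ in binary, which is what keeps almost all merges between lists of equal length.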


This is the algorithm used for linked lists in the Linux kernel, and it has a few handy features:

  • no allocations and no recursion are needed, so there is no chance of an out-of-memory situation.

  • you only walk the original list once.

  • all merge passes except the final ones combine lists of equal length. This means there are few unbalanced passes, which would spend more time traversing the longer list than actually merging the two lists together.

