
The first question is where the cost of your algorithm lies, and it is most probably not in reading the input. If I understand your algorithm correctly, you read all of the node ids into memory, then iterate over the links: for each link you pick one node and scan for other links that include that node. That algorithm is O(N^2), which for a large input becomes a huge number of operations.

If what you need to know is which nodes have a single connection, all you need to keep in memory is one entry per node (100k entries). How much information per node? Very little: to detect nodes with a single connection it is enough to store the number of times each node has been seen (0 meaning never seen). Read one line at a time and increment the counters for both endpoints (cost: O(N), with N being the number of links). At the end of the process, scan the entries and print those whose count is 1 (cost: O(M), where M is the number of nodes). The total cost of the algorithm is linear in the number of nodes plus the number of edges, with space requirements linear in the number of nodes.

Compare that with the original approach: space 400k vs. 160M; operations 10M vs. 10^14.

In most programming contests the goal is not to shave constant factors off the operations, but to improve the asymptotic complexity of the solution. Naive solutions that seem to work for small inputs won't work for large ones. O(N^2) for a problem of size 10 is basically 100 operations, almost a constant factor! But as the problem grows, size 100 becomes 10k operations, and size 1000 becomes 1M...
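To make the counting approach concrete, here is a minimal C++ sketch. The input format (a header line with the node and link counts, followed by one link per line as two zero-based node ids) is an assumption, since the original problem statement isn't shown; adapt the parsing to the real format.

```cpp
#include <cstdio>
#include <vector>

// Sketch of the linear-time approach, assuming a hypothetical input
// format: first line "numNodes numLinks", then one link per line as
// two zero-based node ids. Adjust to the actual problem statement.
int main() {
    int numNodes, numLinks;
    if (std::scanf("%d %d", &numNodes, &numLinks) != 2) return 1;

    // One counter per node; 0 means the node was never seen.
    std::vector<int> degree(numNodes, 0);

    // One pass over the links: O(N) time, N = number of links.
    for (int i = 0; i < numLinks; ++i) {
        int a, b;
        if (std::scanf("%d %d", &a, &b) != 2) return 1;
        ++degree[a];
        ++degree[b];
    }

    // One pass over the nodes: O(M) time, M = number of nodes.
    for (int node = 0; node < numNodes; ++node) {
        if (degree[node] == 1)
            std::printf("%d\n", node);
    }
    return 0;
}
```

The only state is the degree array, so memory stays linear in the number of nodes no matter how many links the input contains, and no link ever needs to be stored.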
 
