Well, at the extreme risk of receiving a lot of down votes, I have to comment on this one. Firstly, this is a good question, well stated to address the specific potential issue you mentioned. However, you have neglected to describe the "lengthy" process you're trying to accomplish.

My experience has taught me one thing:

> If the question you're asking is hard to answer, change the question.

Though I know very little of your specific problem, I think this neatly applies to your predicament. As others have mentioned, temp tables are nasty, creating your own tables for a specific task is nastier still, and updating large quantities of data in SQL is expensive.

> Ask yourself: "Can I avoid it all?"

Quite often people choose to implement extremely complicated logic in the database because they believe SQL can do it faster. Practically, this is a flawed concept. Databases are storage/serialization devices: they are good at storing, updating, locating, and synchronizing access to data. They are not well equipped for processing complex operations. Even after Microsoft (and others) bastardized the database by injecting full development languages into it, it cannot perform as well as a well-written client (depending on the complexity of the operations, which I suspect you have surpassed).

As an example, say you have a database with around 2 GB of raw data and you want to produce a complex report or analysis over the entire data set. Simply put, 2 GB of memory is easy to come by: slurp the entire database (or the portion you need) into memory, using dictionaries or whatever else to build the look-ups you will need. Depending on several factors, the whole thing will likely run several times faster than SQL, can easily be unit tested, and will be (IMHO) significantly easier to build, debug, and maintain than a nasty assortment of sprocs constructing dynamic SQL. Even with more than 2 GB of raw data, client-side caches can easily be built with several existing technologies (B-Trees, ISAM, or the like).

The product I work on today has 2.4 TB of data in the database, and we have not a single sproc, join statement, or even a non-equality WHERE clause.

But alas, my advice may or may not be pertinent to your specific circumstances, since I do not know your objectives or constraints. Hopefully, if nothing else, it will make you ask yourself:

> "Am I asking the right question?"
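To make the "slurp it into memory" idea concrete, here is a rough Python sketch. The `example.db` file, the `customers`/`orders` tables, and their columns are all made up for illustration, and the standard-library `sqlite3` module just stands in for whichever driver your database actually needs. The shape is the point: one trivial SELECT per table, dictionary look-ups built client-side, and the "complex" logic living in ordinary code instead of sprocs or dynamic SQL.

```python
import sqlite3
from collections import defaultdict

# Stand-in connection; swap in the driver for your actual database.
conn = sqlite3.connect("example.db")

# One plain SELECT per table -- no joins, no sprocs, no dynamic SQL.
customers = conn.execute("SELECT id, name, region FROM customers").fetchall()
orders = conn.execute("SELECT id, customer_id, total FROM orders").fetchall()

# Build the look-ups once, in memory.
customer_by_id = {cid: (name, region) for cid, name, region in customers}

orders_by_customer = defaultdict(list)
for order_id, customer_id, total in orders:
    orders_by_customer[customer_id].append(total)

# The "complex report" is now ordinary, testable code:
# total revenue per region.
revenue_by_region = defaultdict(float)
for customer_id, totals in orders_by_customer.items():
    _, region = customer_by_id[customer_id]
    revenue_by_region[region] += sum(totals)

for region, revenue in sorted(revenue_by_region.items()):
    print(region, revenue)
```

Because everything past the two SELECTs is plain in-memory data, each step can be unit tested without a database fixture, which is a big part of why this tends to be easier to build, debug, and maintain.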