You can potentially see performance improvements by adding index(es); it depends a lot on the specifics. :)

How much of the total row size do your predicated columns make up? How many rows do you expect to match? Do you need to return all rows that match the predicate, or just the top 1 or top n rows?

If you are searching for values with high selectivity/uniqueness (so few rows to return), and the predicated columns are a smallish portion of the entire row size, an index could be quite useful. It will still be a scan, but your index will fit far more rows per page than the source table.

Here is an example where the total row size is much greater than the size of the column to search across:

create table t1 (v1 varchar(100), b1 varbinary(8000))
go

--add 10k rows of filler
insert t1 values ('abc123def', cast(replicate('a', 8000) as varbinary(8000)))
go 10000

--add 1 row to find
insert t1 values ('abc456def', cast(replicate('a', 8000) as varbinary(8000)))
go

set statistics io on
go

select * from t1 where v1 like '%456%'
--shows 10001 logical reads

--create an index that only contains the column(s) to search across
create index t1i1 on t1(v1)
go

select * from t1 where v1 like '%456%'
--shows 37 logical reads

If you look at the actual execution plan you can see the engine scanned the index and did a bookmark lookup on the matching row. You can also tell the optimizer directly to use the index, if it hadn't decided to use this plan on its own:

select * from t1 with (index(t1i1)) where v1 like '%456%'

If you have a bunch of columns to search across but only a few that are highly selective, you could create multiple indexes and use a reduction approach: first determine a set of IDs (or whatever your PK is) from your highly selective index, then search your less selective columns with a filter against that small set of PKs (see the sketch at the end of this answer).

If you always need to return a large set of rows, you would almost certainly be better off with a table scan.

So the possible optimizations depend a lot on the specifics of your table definition and the selectivity of your data.

HTH! -Adrian
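As a follow-up, here is a minimal sketch of the reduction approach described above. The table and column names (docs, id, title, body) are hypothetical, and it assumes id is the clustered primary key, so the narrow index on title implicitly carries id and phase 1 can be answered from the index alone:

--hypothetical table: docs(id int primary key, title varchar(200), body varchar(max))
--narrow index over the highly selective column only
create index docs_i_title on docs(title)
go

--phase 1: scan the small index for the selective predicate, keeping only the PKs
select id
into #hits
from docs
where title like '%456%'

--phase 2: evaluate the less selective predicate against just that small set of rows
select d.*
from docs d
join #hits h on h.id = d.id
where d.body like '%widget%'

The temp table just makes the two phases explicit; an IN subquery against the same table would often be planned the same way, but the optimizer is free to choose otherwise.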