<blockquote> <p>I wrote a JSON-API in NodeJS for a small project, running behind an Apache webserver.</p> </blockquote> <p>I would just run the API on a different port, not behind Apache (as a proxy?). If you do want to proxy, I would advise you to use <a href="http://nginx.net/" rel="nofollow">NGINX</a>. See Ryan Dahl's <a href="http://s3.amazonaws.com/four.livejournal/20091117/jsconf.pdf" rel="nofollow">slides</a> comparing Apache and NGINX (slides 8+). NGINX can also do compression and caching, and it is fast. Maybe you should not compress all your JSON (how large is it? a few KB?). I recommend you read <a href="http://code.google.com/speed/page-speed/docs/payload.html#GzipCompression" rel="nofollow">Google's Page Speed "Minimum payload size" section</a> (a good read!) explaining this, which I also quote below:</p> <blockquote> <p>Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger.</p> </blockquote> <blockquote> <p>Now I'd like to improve performance by adding caching and compression.</p> </blockquote> <p>You could do compression and caching via <a href="http://www.igvita.com/2008/02/11/nginx-and-memcached-a-400-boost/" rel="nofollow">NGINX (+memcached)</a>, which is going to be very fast. Even better would be a CDN (for static files), since CDNs are optimized for exactly this purpose. I don't think you should be doing any compression in node.js itself, although some modules are available through <a href="http://search.npmjs.org/" rel="nofollow">NPM's search</a> (search for gzip), for example <a href="https://github.com/saikat/node-gzip" rel="nofollow">https://github.com/saikat/node-gzip</a>.</p> <p>For caching I would advise you to have a look at <a href="http://redis.io" rel="nofollow">redis</a>, which is extremely fast.
It will even be faster than most client libraries, because the fast node.js client library <a href="https://github.com/mranney/node_redis" rel="nofollow">node_redis</a> uses <a href="https://github.com/antirez/hiredis" rel="nofollow">hiredis</a> (C). For this it is important to also install <code>hiredis</code> via npm:</p> <pre><code>npm install hiredis redis</code></pre> <p>Some benchmarks with hiredis:</p> <pre><code>PING: 20000 ops 46189.38 ops/sec 1/4/1.082
SET: 20000 ops 41237.11 ops/sec 0/6/1.210
GET: 20000 ops 39682.54 ops/sec 1/7/1.257
INCR: 20000 ops 40080.16 ops/sec 0/8/1.242
LPUSH: 20000 ops 41152.26 ops/sec 0/3/1.212
LRANGE (10 elements): 20000 ops 36563.07 ops/sec 1/8/1.363
LRANGE (100 elements): 20000 ops 21834.06 ops/sec 0/9/2.287
</code></pre> <blockquote> <p>The API calls have unique URLs (e.g. /api/user-id/content) and I want to cache them for at least 60 seconds.</p> </blockquote> <p>You can achieve this caching easily thanks to redis's <a href="http://redis.io/commands/setex" rel="nofollow">SETEX</a> command. This is going to be extremely fast.</p>