
Cache multiple pages/images from Instagram
I'm working on a small project where users can see images tagged with, in this case, "kitties". Instagram only allows 5000 requests/hour; I don't think it will reach that, but I'm choosing to cache anyway. Also because I can't figure out how to get the back-link to work: I can only get the link for the next page, and then the link for the recent page becomes the current page, a link to itself. Also, the API can return a strange number of images, sometimes 14, sometimes 20 and so on. I want it to always show 20 images per page and only have 5 pages (100 images), and then update this file every 5-10 minutes or something.

So, my plan is to store around 100 images in a file. I got it working, but it's incredibly slow. The code looks like this:

```php
$cachefile = "instagram_cache/".TAG.".cache";
$num_requests = 0; // Just for developing, to check how many requests it makes

// If the file does not exist or is older than UPDATE_CACHE_TIME seconds
if (!file_exists($cachefile) || time() - filemtime($cachefile) > UPDATE_CACHE_TIME) {
    $images = array();
    $current_file = "https://api.instagram.com/v1/tags/".TAG."/media/recent?client_id=".INSTAGRAM_CLIENT_ID;
    $current_image_index = 0;

    while (true) {
        // Get data from the API
        $contents = file_get_contents($current_file);
        $num_requests++;

        // Decode it!
        $json = json_decode($contents, true);

        // Get what we want!
        foreach ($json["data"] as $x => $value) {
            array_push($images, array(
                'img_nr'       => $current_image_index,
                'thumb'        => $value["images"]["thumbnail"]["url"],
                'fullsize'     => $value["images"]["standard_resolution"]["url"],
                'link'         => $value["link"],
                'time'         => date("d M", $value["created_time"]),
                'nick'         => $value["user"]["username"],
                'avatar'       => $value["user"]["profile_picture"],
                'text'         => $value['caption']['text'],
                'likes'        => $value['likes']['count'],
                'comments'     => $value['comments']['data'],
                'num_comments' => $value['comments']['count'],
            ));

            // Stop once the requested amount of images has been collected...
            if ($current_image_index > MAXIMUM_IMAGES_TO_GET) break;
            $current_image_index++;
        }

        // ...and break out of the outer loop too
        if ($current_image_index > MAXIMUM_IMAGES_TO_GET) break;

        if ($json['pagination']['next_url'])
            $current_file = $json['pagination']['next_url'];
        else
            break; // No more pages to get!
    }

    file_put_contents($cachefile, json_encode($images));
}
```

This feels like a very ugly hack; any ideas for how to make this work better?

Or can someone tell me how to make that "back-link" work like it should? (Yes, I could use JS and go -1 in history, but no!)

Any ideas, suggestions, help, comments etc. are appreciated.
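
For what it's worth, once that cache file exists, the paging and the back-link can be handled entirely locally. A minimal sketch, assuming the cache format built above; the `?page=` parameter and the two constants are illustrative, not part of the original code:

```php
<?php
// Sketch only: serve pages straight from the cache file built above.
// IMAGES_PER_PAGE and TOTAL_PAGES take the question's numbers
// (20 images per page, 5 pages); the ?page= parameter is assumed.
define('TAG', 'kitties');
define('IMAGES_PER_PAGE', 20);
define('TOTAL_PAGES', 5);

$cachefile = "instagram_cache/".TAG.".cache";
$images = json_decode(file_get_contents($cachefile), true);

// Clamp the requested page to 1..TOTAL_PAGES.
$page = isset($_GET['page']) ? (int)$_GET['page'] : 1;
$page = max(1, min(TOTAL_PAGES, $page));

// Always exactly 20 images per page, regardless of what the API returned.
$page_images = array_slice($images, ($page - 1) * IMAGES_PER_PAGE, IMAGES_PER_PAGE);

// With numbered pages the back-link is just page - 1; no API pagination needed.
$back_link = ($page > 1) ? "?page=".($page - 1) : null;
$next_link = ($page < TOTAL_PAGES) ? "?page=".($page + 1) : null;
```

Since all 100 images live in one local file, both links are plain page numbers and Instagram's pagination URLs never reach the UI.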
Comments:

1. Having this same problem, Gubbfett, and implementing a similar solution, as Instagram seems to have so many damn limitations in place with regards to requests per hour, not allowing multi-tag searches, maximum images per request etc. Have you worked out why this was so slow? I would think reading JSON directly off the disk would be super fast and the only delay would be rendering the images to the screen.
2. Well, the thing that was slow was grabbing the images; once the images were finally collected, it was no problem getting them. I have however made some changes and set up a MySQL DB to store the images, and then run that file as a cron job every 5 minutes (a rough sketch of this setup follows after the comments). It was faster and more reliable, and while the cron job is running, users can still get images since it's not writing to a file at that moment. Another workaround I tested for the 5000 requests/hour limit was just to create multiple accounts for the app and have a controller that changes the ID number for every search. ;)
3. Yep, I have done something similar and it's working really well, especially with the requirement to have multi-tag searching, as I can now search the cached JSON files directly, which contain a list of all tags per media item in the results (see the tag-search sketch below). The only trick is how to get as many results as possible (i.e. 100+) sitting in the cached results at all times, as Instagram only lets me grab 30 at a time. I was going to try using the paging options to pull back more using a cron job in the background. I love the idea of multiple accounts being used to switch the IDs, very clever :)
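
A rough sketch of the cron + MySQL setup described in comment 2; the `instagram_images` table, its columns, and the connection details are all assumptions, not from the original posts:

```php
<?php
// cron_fetch.php -- run e.g. every 5 minutes: */5 * * * * php cron_fetch.php
// Sketch only: table name, columns and PDO credentials are assumptions.
define('TAG', 'kitties');
define('INSTAGRAM_CLIENT_ID', 'your-client-id');

$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass');

$json = json_decode(file_get_contents(
    "https://api.instagram.com/v1/tags/".TAG."/media/recent?client_id=".INSTAGRAM_CLIENT_ID
), true);

$stmt = $pdo->prepare(
    "REPLACE INTO instagram_images (media_id, thumb, fullsize, link, nick)
     VALUES (?, ?, ?, ?, ?)"
);

foreach ($json['data'] as $value) {
    // REPLACE INTO keeps the table fresh without duplicates, and readers
    // keep hitting complete rows while the cron job is writing.
    $stmt->execute(array(
        $value['id'],
        $value['images']['thumbnail']['url'],
        $value['images']['standard_resolution']['url'],
        $value['link'],
        $value['user']['username'],
    ));
}
```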
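
And a sketch of the multi-tag search over cached files mentioned in comment 3, assuming each cached item was stored with the `tags` array the API returns for a media item:

```php
<?php
// Sketch only: assumes each cached item kept the API's 'tags' array.
function search_cache(array $cache_files, array $wanted_tags) {
    $matches = array();
    foreach ($cache_files as $file) {
        foreach (json_decode(file_get_contents($file), true) as $item) {
            // Keep items that carry every requested tag.
            if (count(array_intersect($wanted_tags, $item['tags'])) === count($wanted_tags)) {
                $matches[$item['link']] = $item; // key by link to de-duplicate
            }
        }
    }
    return array_values($matches);
}

// E.g. search every cached tag file for items tagged both kitties and cute.
$results = search_cache(glob("instagram_cache/*.cache"), array("kitties", "cute"));
```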
 
