<p>A few days ago you asked (nearly) the same question and got no answers. I did not have time to answer then, but now it is time to write some remarks.</p>

<p>First of all, your question(s) and (most of) the answers and comments show a big misunderstanding of <code>NSImage</code>, <code>NSImageRep</code> and the image stored in the filesystem.</p>

<p>The image stored in the filesystem is a complicated data structure which not only contains all the pixels of an image (if it is a raster image) but also a lot of metadata: comments, dates, information about the camera, thumbnail images, and all this sometimes in different formats: EXIF, Photoshop, XML etc. So you cannot assume that the size of the file has anything to do with the image in the computer that is depicted on the screen or queried for special properties. To get these data for further usage you can do:</p>

<pre><code>NSData *imgData = [NSData dataWithContentsOfURL:url];
</code></pre>

<p>or</p>

<pre><code>NSData *imgData = [NSData dataWithContentsOfFile:[url path]];
</code></pre>

<p>or you directly load an image as an object of <code>NSImage</code>:</p>

<pre><code>NSImage *image = [[NSImage alloc] initWithContentsOfURL:url]; // similar methods: see the docs
</code></pre>

<p>And if you now think this is the file image data transformed into a Cocoa structure, you are wrong. An object of the class <code>NSImage</code> is not an image; it is simply a container for zero, one or more image representations. GIF, JPG and PNG images always have exactly one representation, TIFF may have one or more, and ICNS has about five or six image representations.</p>

<p>Now we want some information about the image representations:</p>

<pre><code>for( NSUInteger i=0; i&lt;[[image representations] count]; i++ ){
    // let us assume we have an NSBitmapImageRep
    NSBitmapImageRep *rep = [[image representations] objectAtIndex:i];
    // get information about this rep
    NSUInteger pixelX = [rep pixelsWide];
    NSUInteger pixelY = [rep pixelsHigh];
    CGFloat sizeX = [rep size].width;
    CGFloat sizeY = [rep size].height;
    CGFloat resolutionX = 72.0*pixelX/sizeX;
    CGFloat resolutionY = 72.0*pixelY/sizeY;
    // test if there are padding bits per pixel
    if( [rep bitsPerSample]&gt;=8 ){
        NSInteger paddingBits = [rep bitsPerPixel] - [rep bitsPerSample]*[rep samplesPerPixel];
        // test if there are padding bytes per row
        NSInteger paddingBytes = [rep bytesPerRow] - ([rep bitsPerPixel]*[rep pixelsWide]+7)/8;
        NSUInteger bitmapSize = [rep bytesPerRow] * [rep pixelsHigh];
    }
}
</code></pre>

<p>Another remark: you said:</p>

<blockquote> <p>I scanned an image (.tiff) with Macintosh millions of colors which means 24 bits per pixel.</p> </blockquote>

<p>No, that need not be so. If a pixel has only three components, it may use not only 24 but sometimes 32 bits per pixel because of some optimization rules. Ask the rep; it will tell you the truth. And ask for the <code>bitmapFormat</code>! (Details in the docs.)</p>

<p>Finally: you need not use the CG functions. <code>NSImage</code> and <code>NSImageRep</code> do it all.</p>