Thursday, May 24, 2012

UIImagePickerController, UIImage, Memory and More?


I've noticed that there are many questions about how to handle UIImage objects, especially in conjunction with UIImagePickerController, and how to then display them in a view (usually a UIImageView). Here is a collection of common questions and their answers. Feel free to edit and add your own.



I obviously learnt all this information from somewhere too. Various forum posts, StackOverflow answers and my own experimenting brought me to all these solutions. Credit goes to those who posted some sample code that I've since used and modified. I don't remember who you all are - but hats off to you!



How Do I Select An Image From the User's Images or From the Camera?



You use UIImagePickerController. The documentation for the class gives a decent overview of how one would use it, and can be found here.



Basically, you create an instance of the class (it's a modal view controller), set yourself (or some other class) as its delegate, and present it. You'll then get notified when the user selects some form of media (a movie or an image; video is available from 3.0 on the 3GS), and you can do whatever you want with it.
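For example, here's a minimal sketch of presenting the picker. It assumes self is a view controller that adopts both UIImagePickerControllerDelegate and UINavigationControllerDelegate, and it uses the pre-iOS 5 modal presentation API to match the rest of this post:


- (void)showImagePicker
{
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary; // or ...SourceTypeCamera
    picker.delegate = self;

    // Present it modally (pre-iOS 5 style)
    [self presentModalViewController:picker animated:YES];
    [picker release];
}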



My Delegate Was Called - How Do I Get The Media?



The delegate method signature is the following:




- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info;



Put a breakpoint there and inspect the dictionary in the debugger to see exactly what's in it; you use it to extract the media. For example:




UIImage* image = [info objectForKey:UIImagePickerControllerOriginalImage];



There are other keys that work as well, all in the documentation.
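Putting it together, a sketch of a typical delegate implementation (which also dismisses the picker when it's done, assuming self presented it) might look like this:


- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // ... hand the image off to whatever needs it ...

    [self dismissModalViewControllerAnimated:YES];
}

- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker
{
    [self dismissModalViewControllerAnimated:YES];
}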



OK, I Got The Image, But It Doesn't Have Any Geolocation Data. What Gives?



Unfortunately, Apple decided that we're not worthy of this information. When the data is loaded into the UIImage, all of the EXIF/geolocation data is stripped. But see the answer to the next question for a way to get at the original image data (on iOS 4+).



Can I Get To The Original File Representing This Image on the Disk?



As of iOS 4, you can, but it's very annoying. Use the following code to get an AssetsLibrary URL for the image, and then pass the URL to assetForURL:resultBlock:failureBlock:




NSURL *referenceURL = [info objectForKey:UIImagePickerControllerReferenceURL];
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:referenceURL resultBlock:^(ALAsset *asset) {
    // code to handle the asset here
} failureBlock:^(NSError *error) {
    // error handling
}];
[library release];



It's annoying because the user is asked whether your application can access their current location, which is rather confusing since you're actually trying to access their photo library. Unless you really do need the EXIF location data, that prompt is likely to puzzle them.



Make sure to include the AssetsLibrary framework to make this work.
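As an aside, once you have the ALAsset you can read the original metadata off it. Here's a small sketch of what might go inside the resultBlock above (reading the location also requires linking the CoreLocation framework):


ALAssetRepresentation *rep = [asset defaultRepresentation];
NSDictionary *metadata = [rep metadata];                                 // full EXIF/TIFF/GPS dictionaries
CLLocation *location = [asset valueForProperty:ALAssetPropertyLocation]; // requires CoreLocation
NSLog(@"metadata: %@ location: %@", metadata, location);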



How Can I Look At The Underlying Pixels of the UIImage?



Since the UIImage is immutable, you can't access its pixels directly. However, you can make a copy of the pixel data. The code looks something like this:




UIImage* image = ...; // An image
NSData* pixelData = (NSData*)CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const unsigned char* pixelBytes = (const unsigned char*)[pixelData bytes];

// Walk the pixels, assuming 32-bit RGBA
for (NSUInteger i = 0; i < [pixelData length]; i += 4) {
    unsigned char red   = pixelBytes[i];     // red
    unsigned char green = pixelBytes[i + 1]; // green
    unsigned char blue  = pixelBytes[i + 2]; // blue
    unsigned char alpha = pixelBytes[i + 3]; // alpha
    // ... inspect the values here ...
}

[pixelData release]; // CGDataProviderCopyData hands us a copy that we own



However, note that CGDataProviderCopyData gives you an "immutable" copy of the data - meaning you shouldn't write to it (and you may get an EXC_BAD_ACCESS error if you do). Look at the next question if you want to see how you can modify the pixels.



How Do I Modify The Pixels of the UIImage?



The UIImage is immutable, meaning you can't change it. Apple posted a great article on how to get a copy of the pixels and modify them, and rather than copy and paste it here, you should just go read the article.



Once you have the bitmap context as they mention in the article, you can do something similar to this to get a new UIImage with the modified pixels:




CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage* newImage = [UIImage imageWithCGImage:ref];
CGImageRelease(ref); // the UIImage retains the CGImage, so release our reference



Do remember to release your references though, otherwise you're going to be leaking quite a bit of memory.



After I Select 3 Images From The Camera, I Run Out Of Memory. Help!



You have to remember that even though on disk these images take up only a few hundred kilobytes at most, that's because they're compressed as PNG or JPG. When they are loaded into a UIImage, they are decompressed. A quick back-of-the-envelope calculation would be:




width x height x 4 = bytes in memory



That's assuming 32-bit pixels. If you have 16-bit pixels (for example RGBA-5551), then you'd replace the 4 with a 2.



Now, images taken with the camera are 1600 x 1200 pixels, so let's do the math:




1600 x 1200 x 4 = 7,680,000 bytes = ~8 MB



8 MB is a lot, especially when you have a limit of around 24 MB for your application. That's why you run out of memory.



OK, I Understand Why I Have No Memory. What Do I Do?



There is rarely any reason to display images at their full resolution. The iPhone's screen is 480 x 320 pixels, so you're just wasting memory. If you find yourself in this situation, ask yourself the following question: do I need the full-resolution image?



If the answer is yes, then you should save it to disk for later use.



If the answer is no, then read the next part.



Once you've decided what to do with the full-resolution image, you need to create a smaller image to use for display. Often you'll even want several sizes of your image: a thumbnail, a screen-sized one for display, and the original full-resolution image (see the sketch below).
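For example, a sketch of deriving those sizes up front. The resize method is the one described in the next section, and MyImageUtils is just a placeholder for whichever class you put it on; the target sizes are also just examples:


UIImage *original = ...; // the full-resolution image from the picker

// MyImageUtils is a placeholder; the resize method itself is shown below.
UIImage *displayImage = [MyImageUtils imageWithImage:original scaledToSize:CGSizeMake(320, 240)];
UIImage *thumbnail    = [MyImageUtils imageWithImage:original scaledToSize:CGSizeMake(100, 75)];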



OK, I'm Hooked. How Do I Resize the Image?



Unfortunately, UIKit doesn't give you a single built-in method for resizing an image. Also, it's important to note that when you resize an image, you get a new image - you're not modifying the old one.



There are a couple of methods to do the resizing. I'll present them both here, and explain the pros and cons of each.



Method 1: Using UIKit




+ (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
    // Create a graphics image context
    UIGraphicsBeginImageContext(newSize);

    // Tell the old image to draw in this new context, with the desired
    // new size
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];

    // Get the new image from the context
    UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();

    // End the context
    UIGraphicsEndImageContext();

    // Return the new image.
    return newImage;
}



This method is very simple and works great. It will also deal with the UIImageOrientation for you, meaning that you don't have to care whether the camera was sideways when the picture was taken. However, this method is not thread-safe, and since thumbnailing is a relatively expensive operation (roughly 2.5 s on a 3G for a 1600 x 1200 pixel image), it's very much an operation you may want to do in the background, on a separate thread.



Method 2: Using CoreGraphics
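One note before the code: both CoreGraphics versions below use a radians() helper that is not part of any framework, so you need to define it yourself. A minimal definition might be:


// Not a system function - a tiny helper used by the CoreGraphics code below.
static inline double radians(double degrees) { return degrees * M_PI / 180.0; }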




+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)newSize
{
    CGFloat targetWidth = newSize.width;
    CGFloat targetHeight = newSize.height;

    CGImageRef imageRef = [sourceImage CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);

    if ((bitmapInfo & kCGBitmapAlphaInfoMask) == kCGImageAlphaNone) {
        bitmapInfo = kCGImageAlphaNoneSkipLast;
    }

    CGContextRef bitmap;

    // Pass 0 for bytesPerRow so Core Graphics computes it for the new size;
    // reusing the source image's bytes-per-row can trigger an
    // "unsupported parameter combination" error.
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    } else {
        bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    }

    if (sourceImage.imageOrientation == UIImageOrientationLeft) {
        CGContextRotateCTM(bitmap, radians(90));
        CGContextTranslateCTM(bitmap, 0, -targetHeight);
    } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
        CGContextRotateCTM(bitmap, radians(-90));
        CGContextTranslateCTM(bitmap, -targetWidth, 0);
    } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
        // NOTHING
    } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM(bitmap, targetWidth, targetHeight);
        CGContextRotateCTM(bitmap, radians(-180.));
    }

    CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage* newImage = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);
    CGImageRelease(ref);

    return newImage;
}



The benefit of this method is that it is thread-safe, and it still takes care of all the small things (using the correct color space and bitmap info, dealing with image orientation) that the UIKit version handles for you. Because it's thread-safe, you can push the work off the main thread, as sketched below.
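A quick sketch of running the CoreGraphics version on a background queue with GCD (again, MyImageUtils is just a placeholder for wherever the method lives, and fullImage and imageView are assumed to already exist):


dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Expensive work off the main thread
    UIImage *thumb = [MyImageUtils imageWithImage:fullImage scaledToSize:CGSizeMake(100, 100)];

    dispatch_async(dispatch_get_main_queue(), ^{
        // UIKit work back on the main thread
        imageView.image = thumb;
    });
});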



How Do I Resize and Maintain Aspect Ratio (like the AspectFill option)?



It is very similar to the method above, and it looks like this:




+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize
{
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;

        if (widthFactor > heightFactor) {
            scaleFactor = widthFactor; // scale to fit the width; the height will be cropped
        } else {
            scaleFactor = heightFactor; // scale to fit the height; the width will be cropped
        }

        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;

        // center the image
        if (widthFactor > heightFactor) {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        } else if (widthFactor < heightFactor) {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }

    CGImageRef imageRef = [sourceImage CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);

    if ((bitmapInfo & kCGBitmapAlphaInfoMask) == kCGImageAlphaNone) {
        bitmapInfo = kCGImageAlphaNoneSkipLast;
    }

    CGContextRef bitmap;

    // As above, pass 0 for bytesPerRow so Core Graphics computes it for the new size.
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    } else {
        bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    }

    // In the right or left cases, we need to switch scaledWidth and scaledHeight,
    // and also the thumbnail point
    if (sourceImage.imageOrientation == UIImageOrientationLeft) {
        thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
        CGFloat oldScaledWidth = scaledWidth;
        scaledWidth = scaledHeight;
        scaledHeight = oldScaledWidth;

        CGContextRotateCTM(bitmap, radians(90));
        CGContextTranslateCTM(bitmap, 0, -targetHeight);
    } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
        thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
        CGFloat oldScaledWidth = scaledWidth;
        scaledWidth = scaledHeight;
        scaledHeight = oldScaledWidth;

        CGContextRotateCTM(bitmap, radians(-90));
        CGContextTranslateCTM(bitmap, -targetWidth, 0);
    } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
        // NOTHING
    } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM(bitmap, targetWidth, targetHeight);
        CGContextRotateCTM(bitmap, radians(-180.));
    }

    CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage* newImage = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);
    CGImageRelease(ref);

    return newImage;
}



The trick here is to create a bitmap context with the desired target size, but draw the image scaled so that it completely covers that size; one dimension overflows and gets cropped, which is how the aspect ratio is maintained.
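A quick usage sketch: asking for a 100 x 100 thumbnail of a 1600 x 1200 photo scales it to roughly 133 x 100 and centers it, so about 16 pixels are cropped off each side (MyImageUtils is again a placeholder for wherever the method lives):


UIImage *photo = ...; // e.g. a 1600 x 1200 camera image
UIImage *squareThumb = [MyImageUtils imageWithImage:photo
                    scaledToSizeWithSameAspectRatio:CGSizeMake(100, 100)];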



So We've Got Our Scaled Images - How Do I Save Them To Disk?



This is pretty simple. Remember that we want to save a compressed version to disk, and not the uncompressed pixels. Apple provides two functions that help us with this (documentation is here):




NSData* UIImagePNGRepresentation(UIImage *image);
NSData* UIImageJPEGRepresentation(UIImage *image, CGFloat compressionQuality);



And if you want to use them, you'd do something like:




UIImage* myThumbnail = ...; // Get some image
NSData* imageData = UIImagePNGRepresentation(myThumbnail);



Now we're ready to save it to disk, which is the final step (say into the documents directory):




// Give a name to the file
NSString* imageName = @"MyImage.png";

// Now, we have to find the documents directory so we can save it
// Note that you might want to save it elsewhere, like the cache directory,
// or something similar.
NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString* documentsDirectory = [paths objectAtIndex:0];

// Now we get the full path to the file
NSString* fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imageName];

// and then we write it out
[imageData writeToFile:fullPathToFile atomically:NO];



You would repeat this for every version of the image you have.
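For the full-resolution camera image you would more likely want the JPEG variant, which takes a compression quality between 0.0 and 1.0. A sketch, reusing the documentsDirectory variable from above (fullImage is whatever full-resolution image you kept):


// JPEG is usually a better fit for photos; 0.8 is a reasonable quality/size trade-off.
NSData *jpegData = UIImageJPEGRepresentation(fullImage, 0.8f);
NSString *jpegPath = [documentsDirectory stringByAppendingPathComponent:@"MyImage.jpg"];
[jpegData writeToFile:jpegPath atomically:NO];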



How Do I Load These Images Back Into Memory?



Just look at the various UIImage initialization methods, such as +imageWithContentsOfFile:, in the Apple documentation.
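For example, a sketch of loading the saved PNG back from the documents directory:


NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *fullPathToFile = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"MyImage.png"];
UIImage *savedImage = [UIImage imageWithContentsOfFile:fullPathToFile];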


Source: Tips4all

7 comments:

  1. If you want to save the UIImage back into your user's photo roll there's a built in method for doing this as well.

    UIImageWriteToSavedPhotosAlbum( UIImage* image, id target, SEL action, void* userdata);


    Here's the signature of the saving finished callback (the action above):

    - (void) image:(UIImage*)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo;


    You can, of course, omit the saving callback, but saving to the photo roll is non-atomic so you probably want some indicator.

  2. awesome.. thanks! Your method name for resizing should say targetSize instead of newSize.
    And, Xcode doesn't like your radians() method. Am I missing something?

  3. Great tutorial. Thanks. But I can't get the "modify pixels" code working.


    I guess bytes[i] should be changed to pixelBytes[i]?
    I get a dereferencing void* pointer warning and an invalid use of void expression error when compiling.


    Your code:

    UIImage* image = ...; // An image
    NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    void* pixelBytes = [pixelData bytes];

    // Take away the red pixel, assuming 32-bit RGBA
    for(int i = 0; i < [pixelData length]; i += 4) {
    bytes[i] = 0; // red
    bytes[i+1] = bytes[i+1]; // green
    bytes[i+2] = bytes[i+2]; // blue
    bytes[i+3] = bytes[i+3]; // alpha
    }

  4. I think that when you draw an image into a new graphics context and then create a fresh UIImage from that context, it removes the orientation information by default.

    So, the following code may get the "rotate and scale while maintaining aspect ratio" functionality you're looking for.

    However, I don't understand the differences between the thread safe and non-thread safe examples above. If this isn't thread safe then please let me know!

    + (UIImage *) scaleImage: (UIImage *)image scaleFactor:(float)scaleBy
    {
    CGSize size = CGSizeMake(image.size.width * scaleBy, image.size.height * scaleBy);

    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGAffineTransform transform = CGAffineTransformIdentity;

    transform = CGAffineTransformScale(transform, scaleBy, scaleBy);
    CGContextConcatCTM(context, transform);

    // Draw the image into the transformed context and return the image
    [image drawAtPoint:CGPointMake(0.0f, 0.0f)];
    UIImage *newimg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newimg;
    }

  5. Apple does not "strip the EXIF data" from images. The thing is, you get the raw image data BEFORE the EXIF data is ever added. EXIF only makes sense in the context of an image format like JPEG or PNG, which you do not have to start with...

    The real problem is that when you build PNG or JPG representations, EXIF is not added at that time.

    You can however add it yourself - once you have a JPG or PNG, you can write what EXIF you like to it using the iPhone-exif library:

    http://code.google.com/p/iphone-exif/

  6. Thank you. It helped me a lot, but it seems to me that you forgot to check image orientation while calculating scaleFactor in imageWithImage:scaledToSizeWithSameAspectRatio: method.

    CGFloat widthFactor = targetWidth / height;
    CGFloat heightFactor = targetHeight / width;

    if ((sourceImage.imageOrientation == UIImageOrientationUp) || (sourceImage.imageOrientation == UIImageOrientationDown)) {
    widthFactor = targetWidth / width;
    heightFactor = targetHeight / height;
    }

    if (widthFactor > heightFactor) {
    scaleFactor = widthFactor; // scale to fit height
    } else {
    scaleFactor = heightFactor; // scale to fit width
    }

  7. For those getting the "unsupported parameter combination" error, here's a revised version of the routine that resizes while maintaining aspect ratio; I've included the fix that's linked to above. No matter how I post it here, some sequence of characters is messing it up. Hopefully it'll be OK if you copy & paste it into an editor:

    - (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeKeepingAspect:(CGSize)targetSize
    {
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO)
    {
    CGFloat widthFactor = targetWidth / width;
    CGFloat heightFactor = targetHeight / height;

    if (widthFactor > heightFactor)
    {
    scaleFactor = widthFactor; // scale to fit height
    }
    else
    {
    scaleFactor = heightFactor; // scale to fit width
    }

    scaledWidth = width * scaleFactor;
    scaledHeight = height * scaleFactor;

    // center the image
    if (widthFactor > heightFactor)
    {
    thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
    }
    else if (widthFactor < heightFactor)
    {
    thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
    }
    }

    CGContextRef bitmap;
    CGImageRef imageRef = [sourceImage CGImage];
    CGColorSpaceRef genericColorSpace = CGColorSpaceCreateDeviceRGB();
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown)
    {
    bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, 8, 4 * targetWidth, genericColorSpace, kCGImageAlphaPremultipliedFirst);

    }
    else
    {
    bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, 8, 4 * targetWidth, genericColorSpace, kCGImageAlphaPremultipliedFirst);

    }

    CGColorSpaceRelease(genericColorSpace);
    CGContextSetInterpolationQuality(bitmap, kCGInterpolationDefault);

    // In the right or left cases, we need to switch scaledWidth and scaledHeight,
    // and also the thumbnail point
    if (sourceImage.imageOrientation == UIImageOrientationLeft)
    {
    thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
    CGFloat oldScaledWidth = scaledWidth;
    scaledWidth = scaledHeight;
    scaledHeight = oldScaledWidth;

    CGContextRotateCTM (bitmap, radians(90));
    CGContextTranslateCTM (bitmap, 0, -targetHeight);

    }
    else if (sourceImage.imageOrientation == UIImageOrientationRight)
    {
    thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
    CGFloat oldScaledWidth = scaledWidth;
    scaledWidth = scaledHeight;
    scaledHeight = oldScaledWidth;

    CGContextRotateCTM (bitmap, radians(-90));
    CGContextTranslateCTM (bitmap, -targetWidth, 0);

    }
    else if (sourceImage.imageOrientation == UIImageOrientationUp)
    {
    // NOTHING
    }
    else if (sourceImage.imageOrientation == UIImageOrientationDown)
    {
    CGContextTranslateCTM (bitmap, targetWidth, targetHeight);
    CGContextRotateCTM (bitmap, radians(-180.));
    }

    CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage* newImage = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);
    CGImageRelease(ref);

    return newImage;
    }
