The Mental Blog

Software with Intellect


Under the Sheets with iCloud and Core Data: Troubleshooting

I want to finish off this series with a post on troubleshooting, and I’m not going to sugar coat it — there’s lots of trouble to shoot.

You could find much of the material I covered earlier in the series elsewhere, but this post is based on the nitty-gritty, day-to-day issues you only encounter when you try to retrofit a shipping Core Data app with a cool new exhaust in the form of iCloud.

Today we’ll go through the stuff they didn’t teach you in Cocoa school. For many, it may well be the most useful post of the whole series.


Singleton Entities

Many Core Data apps include what I will call singleton entities. Like singleton classes, these are entities for which there should only be one instance in the store. For example, many apps store certain settings or metadata in a single instance of an entity.

If you have entities like this, you will need to develop a strategy for ensuring uniqueness. If an instance is created on two different devices, you will end up with two instances on each device after iCloud merges changes.

There are two ways to approach this. You could simply check for extra instances after a merge, and use a deterministic system for removing them so that the same instances get removed on each device. For example, you could have a globally unique identifier or creation date attribute in the entity, such that you can sort instances and guarantee that the app is retaining the same instance each time.
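As a sketch of the first approach, assume a hypothetical singleton entity called Settings with a creationDate attribute. (The entity and method names here are illustrative, not from my app.) After each merge, every device sorts the instances the same way, and so keeps the same one:

    // Deterministic post-merge de-duplication. Run after a merge; every
    // device retains the oldest instance, so they all converge on the
    // same survivor.
    -(void)deduplicateSettingsInContext:(NSManagedObjectContext *)context
    {
        NSFetchRequest *fetch = [NSFetchRequest fetchRequestWithEntityName:@"Settings"];
        fetch.sortDescriptors = @[ [NSSortDescriptor sortDescriptorWithKey:@"creationDate" ascending:YES] ];
        NSArray *instances = [context executeFetchRequest:fetch error:NULL];
        if ( instances.count < 2 ) return;

        // Keep the first (oldest); delete the rest. A tie on creationDate
        // would need a secondary sort key, such as a unique identifier string.
        for ( NSManagedObject *extra in [instances subarrayWithRange:NSMakeRange(1, instances.count - 1)] ) {
            [context deleteObject:extra];
        }
    }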

Another approach is to create the singleton instances as soon as a new store is set up, and, on devices seeded from iCloud, to disallow user interaction until the singletons have been merged and can be fetched. In this way, you ensure the singleton entities only get created once.
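The second approach might look like this, again with a hypothetical Settings entity. A brand-new store creates the instance immediately; a store seeded from iCloud only ever fetches, returning nil until the merge delivers the singleton, and the UI stays disabled until then:

    // Create-once strategy: only a freshly created store may insert the
    // singleton. Seeded devices wait for it to arrive via iCloud.
    -(NSManagedObject *)settingsInContext:(NSManagedObjectContext *)context isNewStore:(BOOL)isNewStore
    {
        NSFetchRequest *fetch = [NSFetchRequest fetchRequestWithEntityName:@"Settings"];
        NSManagedObject *settings = [[context executeFetchRequest:fetch error:NULL] lastObject];
        if ( !settings && isNewStore ) {
            settings = [NSEntityDescription insertNewObjectForEntityForName:@"Settings"
                                                     inManagedObjectContext:context];
        }
        return settings; // nil on a seeded device until the merge completes
    }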

Side Effects in Accessors

When I first began to integrate iCloud into my Core Data apps, I held to a serious misconception which ended up wasting a considerable amount of time. That misconception was that my source code had no bearing on the Core Data transaction log import process. I thought that the import process was a private implementation detail; while that is largely true, you can in fact influence it, and sometimes that can have a negative impact.

The import takes place in a private context using a private persistent store coordinator, but it still makes use of your managed object classes, and uses key-value coding (KVC) to access properties. That means that if you have any side effects in your custom accessor methods, you could end up undermining the import.

In particular, I have found that creating or deleting objects in an accessor method leads to errors that prevent transaction log imports from completing. For example, take the following setter method.

-(void)setAppearsInSlideshows:(NSNumber *)yn
{
    [self willChangeValueForKey:@"appearsInSlideshows"];
    [self setPrimitiveValue:yn forKey:@"appearsInSlideshows"];
    [self updateFacetPermutations];
    [self didChangeValueForKey:@"appearsInSlideshows"];
}

The updateFacetPermutations method creates and deletes objects in order to keep the object graph in a valid state in normal app operation, but these side effects were causing the transaction log imports to fail. Creation and deletion of objects independent of the deltas in the change set is very likely to conflict with the objects being imported.

The solution to this problem is to ensure that your accessors remain as simple as possible. Use the vanilla methods provided by Core Data. If you need more advanced functionality in order to enforce validity in other parts of your code, either create a secondary accessor using a naming scheme not recognized by KVC, or introduce dependent properties.

To demonstrate these alternatives, consider again the setter shown above. The appearsInSlideshows property could be reverted to use the default accessor methods provided by Core Data. To ensure the validity of the object graph when mutating the property in other parts of the source code, a second setter-like method, e.g. changeAppearsInSlideshowsTo:, could be introduced to take over the function of the original setter. The application code would use this method when changing the property, and the iCloud import would use the standard, unadulterated setter.
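Such a secondary accessor could look something like this (a minimal sketch; updateFacetPermutations stands in for whatever side effects your class needs):

    // The name deliberately does not follow the setKey: pattern, so KVC —
    // and hence the iCloud import — never calls it. Only application code does.
    -(void)changeAppearsInSlideshowsTo:(NSNumber *)yn
    {
        self.appearsInSlideshows = yn;   // standard Core Data setter
        [self updateFacetPermutations];  // side effects live here now
    }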

If you are developing for the Mac, and using Cocoa Bindings, you may be better off introducing a second, dependent property instead. Your interface can then bind to the dependent property, which includes any side effects to ensure object graph validity, while the iCloud import again adopts the standard accessors.

To demonstrate this option, here is the code I used in my own app for the appearsInSlideshows property.

+(NSSet *)keyPathsForValuesAffectingBindableAppearsInSlideshows
{
    return [NSSet setWithObject:@"appearsInSlideshows"];
}

-(void)setBindableAppearsInSlideshows:(NSNumber *)yn
{
    [self willChangeValueForKey:@"bindableAppearsInSlideshows"];
    self.appearsInSlideshows = yn;
    [self updateFacetPermutations];
    [self didChangeValueForKey:@"bindableAppearsInSlideshows"];
}

-(NSNumber *)bindableAppearsInSlideshows
{
    [self willAccessValueForKey:@"bindableAppearsInSlideshows"];
    id result = self.appearsInSlideshows;
    [self didAccessValueForKey:@"bindableAppearsInSlideshows"];
    return result;
}

The bindableAppearsInSlideshows property is used where previously the appearsInSlideshows property would have been used, including in bindings. The setBindableAppearsInSlideshows: method includes the side effects previously in the setAppearsInSlideshows: method, but these will be avoided during the iCloud import.

Validation Failures

Your app’s source can influence the import process in another way too. The validation rules of your entity model are applied during the import, as are validation checks in your managed object classes. If any fail, the import fails, and will never recover, leaving the app’s data in an inconsistent state across devices.

"But why would the validation ever fail?", you might ask. Unfortunately, it turns out that it is much easier to cause a validation failure than you might expect. Any time you have two devices working simultaneously on the same data, the potential for conflict arises, and that could lead to a validation failure. (Note also that simply setting a merge policy is no guarantee that your managed object context will be left in a valid, savable state. This seems to be a common misconception.)

To demonstrate, consider the following simple entity model: Entity A has a to-one relationship to Entity B called b. Entity B has an inverse to-one relationship called a. Neither relationship is optional.

Assume we have two devices (1) and (2) that begin fully synced. Each has one object of class A, and one object of class B, and they are associated with one another. On device 1, we have objects A1 and B1, and on device 2 we have the same logical objects A1 and B1.

Now assume that simultaneous changes are made on each device:

  1. On device 1, we delete B1, insert B2, and associate A1 with B2. Then save changes.
  2. On device 2, we also delete B1, insert B3, and associate A1 with B3. Then save changes.

Device 1 now attempts to import the transaction logs from device 2. B3 will be inserted, and A1 will be associated with B3. So far so good, but B2 is now left with relationship a equal to nil. This relationship is non-optional, so a validation error occurs.

Something similar will occur on device 2, because there are two B objects, and only one A object to associate with. There must thus always be a validation error, because one of the B objects must have its relationship set to nil.

Even worse, any future change will always leave an errant B object hanging around, and will thus fail validation. In effect, the user cannot fix the problem themselves by resetting the relationship. It is permanently broken.

This is more than a theoretical exercise. If you want to see for yourself, download the test app I introduced in previous posts, and carry out the following experiment.

  1. Fill in your team identifier in the source code, and set up provisioning.
  2. Build the test app on two Macs or virtual machines running Lion.
  3. Add a new note on one machine, and start syncing.
  4. Wait for the note to appear on the other machine.
  5. Select the note on each machine, and press the ‘Change Schedule’ button on each.
  6. Now press Save on each machine. Do this at almost the same time, so iCloud cannot sync in between.
  7. Wait for the errors to appear in the console on each Mac.

The errors should look something like this (edited for readability):

2012-06-04 13:09:31.450 iCloudCoreDataTester[72072:239b] 
    -[_PFUbiquityRecordImportOperation main](435): CoreData: Ubiquity:  
    CoreData: Ubiquity: Error saving managed object context changes for transaction log: 
    <PFUbiquityTransactionLog: 0x7fd55ad06920>   
    transactionLogLocation: <PFUbiquityLocation: 0x7fd55ac33a00>: ...

Error: Error Domain=NSCocoaErrorDomain Code=1570 
    "Property/permutation/Entity/ChildSchedule is a required value." 
    {NSValidationErrorObject=<NSManagedObject: 0x7fd55ac3fa10> 
    (entity: ChildSchedule; id: 0x7fd55ac40a70 <x-coredata://01B641DC-E42D-4876-BB6E-7C9F33FEB704/ChildSchedule/p5> ; 
        data: {
            permutation = nil;
            title = "30AD7F4F-1109-44F9-996A-AAE9915150FD-955-0000051A8";
        NSLocalizedDescription=Property/permutation/Entity/ChildSchedule is a required value., 

and later

2012-06-04 13:09:31.451 iCloudCoreDataTester[72072:239b] 
    -[_PFUbiquityRecordsImporter operation:failedWithError:](824): 
    CoreData: Ubiquity:  Import operation encountered a corrupt log file, 
    Error Domain=NSCocoaErrorDomain Code=134302 
    "The operation couldn’t be completed. (Cocoa error 134302.)" UserInfo=0x7fd55ab035d0 
    {underlyingError=Error Domain=NSCocoaErrorDomain Code=1570 
    "Property/permutation/Entity/ChildSchedule is a required value." 
    UserInfo=0x7fd55ad318d0 {NSValidationErrorObject=<NSManagedObject: 0x7fd55ac3fa10> 

These errors are telling you about exactly the scenario I described above: two new objects (ChildSchedule class) are vying for one vacancy in a to-one relationship. One of the objects is left with a nil value for the non-optional permutation relationship.

Living with Validation Failures

It is difficult to see how Apple could come up with a general solution to the problem of validation. In the scenario above, for example, Core Data could just delete the left over object, but that is a pretty extreme measure. The object may not even be new; it may be an existing object that was added to the relationship, so deleting it may have drastic consequences. The ultimate solution is probably that Apple needs to provide more hooks for the developer to resolve validation issues during import, but what do we do in the meantime?

One solution is to simply make all of the relationships in your model optional, or use weak relationships by storing the unique identifiers of related objects, rather than using an explicit relationship in the entity model. This will avoid the validation errors, but at a considerable cost to ease of coding. You will need to add a lot of code to take over the role that Core Data was playing, keeping your object graph in a valid state. At the very least, you will have to scan for stray objects after each merge, and delete or relocate them.

The solution I have chosen is to maintain the same Core Data entity model, with non-optional relationships, but to avoid any validation at all during the transaction log import. It is difficult for me to oversee all of the risks of this without intimate knowledge of the Core Data framework, so please be cautious if you decide to follow my lead. There may be unforeseen issues, but so far at least, it has been working OK for my app.

Here’s how it works: In order to disable validation during the transaction log import, I have overridden the NSManagedObject validateValue:forKey:error: method in a subclass used for all of my entities. The method only validates if the managed object context belongs to the custom subclass (MCManagedObjectContext) used for my main context. If a standard NSManagedObjectContext instance is in play, as during the iCloud import, the method just returns YES whether the object is valid or not.

-(BOOL)validateValue:(__autoreleasing id *)value forKey:(NSString *)key error:(NSError *__autoreleasing *)error
{
    if ( ![self.managedObjectContext isKindOfClass:[MCManagedObjectContext class]] ) {
        return YES;
    }
    else {
        return [super validateValue:value forKey:key error:error];
    }
}
The benefit of this is that validation errors are ignored during the import phase. They are effectively postponed, arising when you try to save your app’s primary context. At this point you have the means to address the errors and retry the save.

To give you some idea how this works, here is the save method used in my app.

    [managedObjectContext performBlock:^{
        if ( managedObjectContext.hasChanges ) {
            NSUInteger attempts = 0;
            NSError *error = nil;
            while ( ![managedObjectContext save:&error] && ++attempts <= MCMaximumSaveAttempts ) {
                [self repairForSaveError:error];
            }

            if ( attempts > MCMaximumSaveAttempts ) {
                NSString *question = NSLocalizedString(@"A problem arose. Could not save changes.", @"Save fail");
                NSString *info = NSLocalizedString(@"You should quit as soon as possible, "
                    @"because continuing could cause other problems.", @"");
                [self runModalAlertWithMessage:question information:info];
            }
        }
    }];

If necessary, several attempts are made to save the context. If a save fails, a method is called to attempt to repair the validation problems.

-(void)repairForSaveError:(NSError *)error
{
    [managedObjectContext processPendingChanges];
    [managedObjectContext.undoManager disableUndoRegistration];

    if ( error.code != NSValidationMultipleErrorsError ) {
        id object = [error.userInfo objectForKey:@"NSValidationErrorObject"];
        [object repairForError:error];
    }
    else {
        NSArray *detailedErrors = [error.userInfo objectForKey:NSDetailedErrorsKey];
        for ( NSError *detailedError in detailedErrors ) {
            id object = [detailedError.userInfo objectForKey:@"NSValidationErrorObject"];
            [object repairForError:detailedError];
        }
    }

    [managedObjectContext processPendingChanges];
    [managedObjectContext.undoManager enableUndoRegistration];
}

This method iterates over the validation errors and retrieves the object responsible for each. The violating object itself is then given the opportunity to address the error. The following code comes from the custom managed object class.

+(BOOL)deletesInvalidObjectsAfterFailedSave
{
    return NO;
}

-(void)repairForError:(NSError *)error
{
    if ( [self.class deletesInvalidObjectsAfterFailedSave] ) {
        [self.managedObjectContext deleteObject:self];
    }
}
The default repair is to do nothing. If an entity class overrides the deletesInvalidObjectsAfterFailedSave method and returns YES, any invalid object is simply deleted. Classes with more advanced requirements can override the repairForError: method to instigate repairs.
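For example, a hypothetical entity class could override repairForError: to rebuild the broken relationship rather than delete itself. The entity and key names here are illustrative, not taken from my app:

    // A more targeted repair: if the non-optional permutation relationship
    // is nil, re-establish it with a fresh object instead of deleting self.
    -(void)repairForError:(NSError *)error
    {
        NSString *key = [error.userInfo objectForKey:NSValidationKeyErrorKey];
        if ( [key isEqualToString:@"permutation"] && ![self valueForKey:@"permutation"] ) {
            id replacement = [NSEntityDescription insertNewObjectForEntityForName:@"FacetPermutation"
                                                           inManagedObjectContext:self.managedObjectContext];
            [self setValue:replacement forKey:@"permutation"];
        }
        else {
            [super repairForError:error];
        }
    }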


Debugging

One reason getting iCloud working in your Core Data app is so difficult is that debugging is torturously slow and frustrating. You build and run on two machines. Then you make a change on one machine, and wait…and wait…and… It’s even worse if iCloud decides to throttle syncing back, which is quite common. If that happens, you may be better off taking the rest of the day off.

To try to ease the pain a little, I’ve gathered some tips here to help you debug Core Data/iCloud apps. It won’t make it pleasant, but will hopefully save you a bit of time.

First, you need to decide what your debugging setup will be. You need two running OSes to test. You can’t run two apps in the same account, and share a single iCloud container. It would be nice, but you can’t. You also can’t really run in two separate user accounts and test using fast user switching. In my experience, you can get some strange locking issues with file coordinators. So you are left with two options: two different devices, or one device running a virtual machine.

I started testing between my iMac and MacBook Air, but after a few weeks elected to purchase VMware Fusion for my iMac, and do all my testing on that. I don’t regret that decision. It also makes it possible to test future OSes like Mountain Lion, so the virtual machine solution has many advantages.

Once you have a good testing setup, you will need a way to see what is going on. Core Data logs are extremely verbose when using iCloud, maybe too verbose. You will see a lot of messages; many of them will look extremely worrying, but are in fact completely harmless.

If you are debugging a specific problem, you may want to see every detail of the import process. In that case, you can make the console messages even more verbose by passing the launch argument -com.apple.coredata.ubiquity.logLevel 3 when launching your app. You set this in the Arguments section of the Run settings of your scheme, but be warned — you will get an awful lot of output.

Glossary of Innocuous Errors

Because the console output of Core Data can be scary to the newcomer, I’ve gathered together a few of the innocuous error messages to help you see the forest for the trees.

The following occur all the time, and just mean the import couldn’t proceed due to missing files and the like. Core Data will try again in a minute or so, so just ignore these and wait.

2012-06-04 13:09:01.295 iCloudCoreDataTester[72072:6f23] +[PFUbiquityTransactionLog loadPlistAtLocation:withError:](378): 
CoreData: Ubiquity:  Encountered an error trying to open the log file at the location: <PFUbiquityLocation: 0x7fd55ac373a0>: ...
    Error: Error Domain=NSCocoaErrorDomain Code=256 "The file “67BFAE7B-9CE3-432A-AD1F-6EA23C648017.1.cdt” couldn’t be opened." 
    UserInfo=0x7fd55ac3b210 {NSURL=file://localhost...cdt, NSDescription=The item failed to download.}

2012-06-04 13:09:01.298 iCloudCoreDataTester[72072:6f23] -[PFUbiquityTransactionLog loadComparisonMetadataWithError:](244): 
CoreData: Ubiquity:  Error encountered while trying to load the comparison metadata for transaction 
log: <PFUbiquityTransactionLog: 0x7fd558e68a60>

This one seems to arise when Core Data is trying to resolve certain conflicts. It seems to be harmless too.

2012-06-06 17:13:05.913 Mental Case[854:ae53] CoreData: warning: An NSManagedObjectContext delegate overrode fault handling 
behavior to silently delete the object with ID '0x7fdae4dcd3c0 <x-coredata://367CA83F-D0FE-4E19-A52F-873A45EC954C/MCFacetPermutation/p5166>' 
and substitute nil/0 for all property values instead of throwing.

And lastly, the next one is actually a sign that an import was successful. At least, it seems to appear just after a successful import. I actually use it as a notification that the import has gone through.

2012-06-06 16:20:38.711 Mental Case[3631:750f] -[_PFUbiquityStack initWithLocalPeerID:andUbiquityRootLocation:](83): 
CoreData: Ubiquity:  Error encountered while trying to connect to the metadata store: Error Domain=NSCocoaErrorDomain Code=512 
"The file couldn’t be saved." UserInfo=0x7f9b7b41d8f0 {}

Where to Now?

That’s it for this series. What can we conclude? Unfortunately, the most apt conclusion is probably that iCloud syncing of Core Data is not really ready for prime time, at least not for any app with a complex data model. If you have a simple model, and patience, it is doable, even if very few have achieved a shipping app at this point.

Is the future brighter? We have Mountain Lion and iOS 6 just around the corner, and while Apple seems to have addressed some concerns, I still have plenty of reservations. Maybe I’m wrong; I even hope that I am. Only time will tell.

The promise of iCloud is great, but syncing is a very difficult problem, and the complexity of the incremental updates that Core Data introduces doesn’t make it any easier. I think Apple will eventually crack it, but it could be a bumpy ride in the meantime. I hope this series has at least taken some of the rough edges off.

Go forth and sync!

Drew McCormack
