Core Data is a fantastic framework and I love using it. The framework is sometimes accused of being slow and performing badly with large data sets, but the truth is that the developer is usually the one to blame.

It is true that Core Data can slow your application down. If a truck pulls a heavier load than it was designed for, then the truck will struggle with its load. But who is at fault in that scenario? The truck? Or the driver?

In this tutorial, I list three tips you can use to improve the performance of your Core Data application.

1. Be Careful What You Ask For

Core Data is a highly optimized persistence framework. Out of the box, it helps you optimize the queries you make to the persistent store through caching, faulting, and other under-the-hood optimizations. No matter how performant and optimized Core Data is, in the end, the framework does what you tell it to do. If you ask for a thousand records, then that is what Core Data will give you.

To get the most out of Core Data, you need to work with the framework, not against it. This includes carefully crafting your fetch requests, only fetching the data you need at a particular point in time. The more precise you are about what you want Core Data to fetch, the better the framework can work for you.

If you are presenting the user with a list of records, then there is no need to fetch a hundred or a thousand records if the screen of the device is only large enough to display ten at a time. Show the user what she asked for plus a healthy margin. Chances are the user is only interested in the most recent records anyway. Don't waste processing power and battery to fetch data that isn't going to be used.

By using limits and carefully crafted predicates, you tell Core Data what you need. If you are using a fetched results controller in your application, then consider using the fetchBatchSize property of its fetch request. Every optimization helps.
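Putting these ideas together, a narrowly scoped fetch request might look something like the following sketch. The Note entity and its createdAt attribute are hypothetical, and the numbers are only an illustration of "a screenful plus a healthy margin".

```swift
import CoreData

// A sketch of a narrowly scoped fetch request, assuming a hypothetical
// "Note" entity with a "createdAt" date attribute.
let request = NSFetchRequest<NSManagedObject>(entityName: "Note")

// Only ask for notes created in the last week.
let lastWeek = Date(timeIntervalSinceNow: -7 * 24 * 60 * 60)
request.predicate = NSPredicate(format: "createdAt >= %@", lastWeek as NSDate)

// Show the most recent notes first.
request.sortDescriptors = [NSSortDescriptor(key: "createdAt", ascending: false)]

// Fetch records in batches of a screenful, and never more than 100 in total.
request.fetchBatchSize = 20
request.fetchLimit = 100
```

With fetchBatchSize set, Core Data only fully materializes the records the table view actually asks for, while fetchLimit caps the result set as a whole.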

2. Don't Block the Main Thread

If the main thread of your application is blocked, even for a brief moment, the user experiences this as your application being unresponsive. If this happens when the user is scrolling a table view or a collection view, then the scrolling is choppy and jittery. Blocking the main thread should be avoided at all costs.

When a managed object context saves its changes, the persistent store coordinator it is linked to pushes the changes to the persistent store. No matter how performant Core Data is, if the changes are written to disk on the main thread, the application's user experience may suffer as a result.

In Core Data Fundamentals, we discussed Core Data and concurrency. In that lesson, I introduced you to a Core Data stack that mitigates the risk of blocking the main thread as a result of changes being written to a persistent store. The architecture of such a Core Data stack isn't complicated.

Core Data Stack

The managed object context that pushes changes to the persistent store coordinator is private. This means that the managed object context does its work on a background thread, using a private queue.

The main managed object context operates on the main thread and is only used for tasks related to the user interface. Additional managed object contexts use the main managed object context as their parent.

This Core Data stack helps reduce performance problems that are related to disk I/O. Because the private managed object context does its work in the background, the user interface doesn't suffer from reading and writing data.
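The stack described above can be sketched in a few lines. In a real application the managed object model is loaded from the application bundle and a persistent store is added to the coordinator; an empty model stands in here to keep the sketch self-contained.

```swift
import CoreData

// A sketch of the Core Data stack described above. The model would
// normally be loaded from the application bundle.
let model = NSManagedObjectModel()
let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)

// Private context: pushes changes to the persistent store coordinator
// on a private queue, keeping disk I/O off the main thread.
let privateContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
privateContext.persistentStoreCoordinator = coordinator

// Main context: a child of the private context, reserved for the user
// interface and operating on the main queue.
let mainContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
mainContext.parent = privateContext
```

Saving the main context only pushes its changes up to the private parent; the private context then saves and writes to disk in the background, which is exactly what keeps the user interface responsive.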

3. Binary Large Objects

Core Data is great for managing object graphs and the relationships between records. It is less well suited to managing large BLOBs of data. Even though the framework won't complain if you store large chunks of data in a persistent store, it isn't the best use of Core Data.

If you want Core Data to be performant, then avoid using it as a store for large blocks of binary data, such as videos, images, or audio. That said, there are several solutions for working with BLOBs of data in combination with Core Data.

Most Core Data applications use a SQLite database as their backing store. SQLite databases are incredibly performant and capable of handling tens of millions of rows. SQLite is rarely the bottleneck of a Core Data application.

Even though SQLite can easily handle large chunks of binary data, it is important to consider whether the database is the right place for storing that data. If you do decide to store BLOBs in a persistent store, or you have no other option, then it's important to consider performance.

You can reduce the risk of running into performance issues by creating separate entities for binary data. Instead of defining the BLOB as an attribute of an entity, create a separate entity and use a to-one relationship to link the BLOB to the entity it is associated with.
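In the data model, that separation looks like the following sketch. The Note and NoteImage entities are hypothetical, and in practice you would configure this in Xcode's model editor rather than in code; the programmatic form just makes the to-one relationship explicit.

```swift
import CoreData

// Hypothetical "NoteImage" entity that holds nothing but the BLOB.
let imageEntity = NSEntityDescription()
imageEntity.name = "NoteImage"
let dataAttribute = NSAttributeDescription()
dataAttribute.name = "data"
dataAttribute.attributeType = .binaryDataAttributeType

// Hypothetical "Note" entity that owns the metadata.
let noteEntity = NSEntityDescription()
noteEntity.name = "Note"
let titleAttribute = NSAttributeDescription()
titleAttribute.name = "title"
titleAttribute.attributeType = .stringAttributeType

// To-one relationship from Note to its BLOB entity, with an inverse.
let toImage = NSRelationshipDescription()
toImage.name = "image"
toImage.destinationEntity = imageEntity
toImage.maxCount = 1 // to-one

let toNote = NSRelationshipDescription()
toNote.name = "note"
toNote.destinationEntity = noteEntity
toNote.maxCount = 1
toImage.inverseRelationship = toNote
toNote.inverseRelationship = toImage

imageEntity.properties = [dataAttribute, toNote]
noteEntity.properties = [titleAttribute, toImage]

let blobModel = NSManagedObjectModel()
blobModel.entities = [noteEntity, imageEntity]
```

Because the BLOB lives in its own entity, fetching a Note no longer drags the binary data into memory; the relationship stays a fault until you actually traverse it.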

An even better solution is to store BLOBs as files on disk and reference them by path in the persistent store. This keeps the persistent store lightweight and avoids putting unnecessary stress on Core Data.
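A minimal sketch of that approach, using Foundation only. The helper writes the BLOB to a directory of your choosing and returns the file name, which is what you would store as a string attribute in the persistent store. The function name is hypothetical.

```swift
import Foundation

// A sketch of storing a BLOB as a file on disk, keeping only its file
// name in the persistent store. Names are hypothetical.
func storeBlob(_ data: Data, in directory: URL) throws -> String {
    let fileName = UUID().uuidString
    let fileURL = directory.appendingPathComponent(fileName)
    try data.write(to: fileURL, options: .atomic)
    // Store the returned file name as a string attribute of the record
    // and resolve it against the same directory when reading it back.
    return fileName
}
```

Remember that Core Data no longer manages the file's lifecycle for you. When the record is deleted, remove the file as well, for example in the managed object's prepareForDeletion() override, otherwise orphaned files accumulate on disk.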

What's Next?

Like concurrency, thinking about performance early on may feel like premature optimization. It isn't, though. Having to modify the data model later because of performance problems is a situation you can easily avoid. Remember to spend some time crafting the data model to make sure you don't run into problems down the road. Planning is key. Questions? Leave them in the comments below or reach out to me on Twitter.