The app launches perfectly. The test suite shows all green. Your code review finds nothing alarming. Three days after shipping to production, users start reporting that their carefully typed messages are getting truncated at seemingly random points. Not always, but often enough to flood your support inbox.

The investigation leads you to a string manipulation function that was migrated from Objective-C. The code compiles without warnings. The logic looks sound. But it's using Swift's computed count property where the original Objective-C used length, and those two things aren't the same when emoji or complex Unicode characters are involved.

This is the invisible danger of AI-assisted migrations. The code doesn't just compile successfully; it runs successfully, most of the time. The bugs only surface in specific scenarios that your testing didn't cover, with specific data that exposes the semantic mismatches between how Objective-C and Swift handle runtime behavior.

Let me show you the five categories of runtime issues that hide in plain sight in AI-migrated code. These aren't theoretical problems. They're patterns that emerge consistently across converted codebases, waiting to surprise you when real users encounter real data.


When Background Threads Touch the User Interface

Threading in Apple development has always been tricky, but Objective-C's Grand Central Dispatch made certain assumptions that Swift doesn't automatically preserve. The result is one of the most common runtime crashes in migrated apps: user interface updates happening on background threads.

Here's what makes this particularly insidious. The original Objective-C code might look like this:

- (void)loadUserData {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSDictionary *userData = [self.apiClient fetchUserData];
        
        self.nameLabel.text = userData[@"name"];
        self.emailLabel.text = userData[@"email"];
    });
}

This code has a bug. It's updating user interface elements from a background thread, which is not permitted on Apple's platforms. But here's the thing: it might have worked anyway in the Objective-C codebase. Maybe the API client was doing its own dispatch back to the main thread. Maybe the specific dispatch queue being used happened to be the main queue in practice. Maybe it was just luck that it never crashed during testing.

An AI tool might convert this Objective-C snippet to:

func loadUserData() {
    DispatchQueue.global(qos: .default).async {
        let userData = self.apiClient.fetchUserData()
        
        self.nameLabel.text = userData["name"] as? String
        self.emailLabel.text = userData["email"] as? String
    }
}

The conversion is mechanically perfect. Every API maps correctly. The closure syntax is proper Swift. It compiles without warnings. But the bug is still there: Xcode's Main Thread Checker will flag it loudly during development, and in production it remains an intermittent crash waiting for the right timing.

What makes this pattern particularly dangerous in AI migrations is that the AI doesn't understand execution context. It sees dispatch_async and converts it to DispatchQueue.async, preserving whatever dispatch queue was specified. It doesn't trace through the code to understand what's happening inside that closure. It doesn't recognize that UILabel assignments require the main thread.

The correct Swift version needs explicit main thread handling:

func loadUserData() {
    DispatchQueue.global(qos: .default).async {
        let userData = self.apiClient.fetchUserData()
        
        DispatchQueue.main.async {
            self.nameLabel.text = userData["name"] as? String
            self.emailLabel.text = userData["email"] as? String
        }
    }
}

Or better yet, modern Swift with async/await:

@MainActor
func loadUserData() async {
    let userData = await apiClient.fetchUserData()
    
    // Running on the main actor, so these UI assignments are safe.
    nameLabel.text = userData["name"] as? String
    emailLabel.text = userData["email"] as? String
}

With async/await and @MainActor annotations, the compiler helps enforce thread safety. But AI tools don't make these architectural decisions. They convert what's there, which means they preserve latent threading bugs that might have been harmless in Objective-C but become crashes in Swift.

The challenge is that these bugs are intermittent and timing-dependent. They don't show up in quick testing on a fast device. They appear when the network is slow, when the device is under memory pressure, when multiple operations overlap in ways your test scenarios didn't cover. Suddenly you're getting crash reports from production with messages about user interface updates on background threads, and you're hunting through thousands of lines of converted code trying to find every place the AI preserved dangerous threading assumptions.


The Collection Type Guessing Game

Objective-C's collection types were beautifully simple in one way: in most legacy code, everything was an untyped NSArray or NSDictionary, and the element types were implicit in how you used them. Swift's type system is more explicit and more powerful, but that creates a problem for AI tools: they have to guess what those implicit types actually were.

Consider this common pattern from an Objective-C networking layer:

- (NSArray *)parseUserList:(NSData *)data {
    NSArray *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
    NSMutableArray *users = [NSMutableArray array];
    
    for (NSDictionary *userDict in json) {
        User *user = [[User alloc] initWithDictionary:userDict];
        [users addObject:user];
    }
    
    return users;
}

An AI tool has several choices here. It might convert this to:

func parseUserList(_ data: Data) -> [Any] {
    let json = try? JSONSerialization.jsonObject(with: data, options: []) as? [Any]
    var users: [Any] = []
    
    for userDict in json ?? [] {
        if let dict = userDict as? [String: Any] {
            let user = User(dictionary: dict)
            users.append(user)
        }
    }
    
    return users
}

This compiles. It even works. But it's lost all the type information. The array is [Any], which means every call site now needs runtime type checks and casts. It's Objective-C's dynamic typing translated into Swift syntax, not proper Swift.
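To see that burden concretely, here's a minimal sketch with a stand-in User type (the real model class would come from your codebase):

```swift
struct User { let name: String }  // stand-in for the real model class

// With [Any], every element needs a runtime cast before use:
let anyUsers: [Any] = [User(name: "Alice"), User(name: "Bob")]
let names = anyUsers.compactMap { ($0 as? User)?.name }

// With [User], the compiler already knows what's in the array:
let typedUsers: [User] = [User(name: "Alice"), User(name: "Bob")]
let typedNames = typedUsers.map(\.name)
```

Every `as?` in the untyped version is a runtime check, and a potential silent failure, that the typed version never pays for.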

The AI tool made this choice because it couldn't be certain what the JSON structure actually was. Without runtime information or comprehensive tests showing the actual data shapes, it defaulted to the safest possible types: Any everywhere. But "safe" in terms of compilation isn't the same as "correct" in terms of intent.

What makes this particularly problematic is performance. Every time you bridge between Swift's native types and Foundation's collection types, there's overhead. When you use [Any], you're forcing runtime type checks throughout your codebase. The migrated app might be noticeably slower than the original, not because Swift is slower, but because the converted code is doing unnecessary bridging and type checking everywhere.

The correct approach requires understanding the data structures:

func parseUserList(_ data: Data) -> [User] {
    guard let json = try? JSONSerialization.jsonObject(with: data) as? [[String: Any]] else {
        return []
    }
    
    return json.compactMap { User(dictionary: $0) }
}

Or ideally, using modern Swift's Codable:

func parseUserList(_ data: Data) -> [User] {
    (try? JSONDecoder().decode([User].self, from: data)) ?? []
}

But determining the correct types requires analyzing how the collections are actually used throughout the codebase. AI tools don't do that semantic analysis. They convert syntax, preserve behavior, and move on. The result is code that works but is neither idiomatic nor performant.

You might see migrated codebases where 80% of collection types are [Any] or [String: Any]. The app works, but it's lost all the benefits of Swift's type system. Every array access, every dictionary lookup requires a cast. The compiler can't help you catch mistakes because everything is dynamically typed at runtime.


Error Handling That Compiles But Doesn't Help

Objective-C's error handling pattern is distinctive: methods that can fail take an NSError ** parameter (a pointer to a pointer), return a BOOL or nil to indicate failure, and populate the error object if something goes wrong. Swift has a completely different philosophy with throws, try, and explicit error types. But AI tools often translate the syntax without translating the concept.

Here's a typical Objective-C method:

- (BOOL)saveDocument:(Document *)document error:(NSError **)error {
    if (![self validateDocument:document]) {
        if (error) {
            *error = [NSError errorWithDomain:@"AppError" 
                                         code:100 
                                     userInfo:@{NSLocalizedDescriptionKey: @"Invalid document"}];
        }
        
        return NO;
    }
    
    return [self.storage writeDocument:document error:error];
}

An AI tool will often convert this to:

func saveDocument(_ document: Document, error: inout NSError?) -> Bool {
    if !validateDocument(document) {
        error = NSError(
            domain: "AppError", 
            code: 100, 
            userInfo: [NSLocalizedDescriptionKey: "Invalid document"]
        )
        return false
    }
    
    return storage.writeDocument(document, error: &error)
}

This is valid Swift. It compiles. It preserves the exact behavior of the Objective-C code. But it's not idiomatic Swift at all. It's Objective-C error handling wearing Swift syntax.

The call sites are particularly awkward:

var error: NSError?
if !saveDocument(myDocument, error: &error) {
    if let error = error {
        print("Failed to save: \(error.localizedDescription)")
    }
}

Compare this to idiomatic Swift error handling:

func saveDocument(_ document: Document) throws {
    guard validateDocument(document) else {
        throw DocumentError.invalid
    }
    
    try storage.writeDocument(document)
}

And the call site:

do {
    try saveDocument(myDocument)
} catch {
    print("Failed to save: \(error.localizedDescription)")
}

The Swift version is clearer, safer, and better integrated with the language. The compiler enforces error handling at call sites. You can't accidentally ignore errors. Type checking works properly with specific error types.

But AI tools don't make this translation because it requires understanding the entire error handling architecture of your application. Which errors are recoverable? Which should be fatal? What's the error hierarchy? These are architectural decisions that require human judgment.
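If you were making those decisions by hand, the DocumentError from the example above might become a small enum. This is one hedged sketch; the cases and messages are assumptions, not code from any original project:

```swift
import Foundation

enum DocumentError: Error, LocalizedError {
    case invalid
    case storageFailure(underlying: Error)

    // LocalizedError lets callers keep using localizedDescription,
    // just as they did with NSError.
    var errorDescription: String? {
        switch self {
        case .invalid:
            return "Invalid document"
        case .storageFailure(let underlying):
            return "Storage failed: \(underlying.localizedDescription)"
        }
    }
}
```

Call sites can then catch DocumentError.invalid specifically and decide whether it's recoverable, something the NSError ** pattern never supported cleanly.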

What makes this particularly frustrating is that the converted code works fine. Your tests pass. The app functions correctly. But you've lost all the benefits of Swift's error handling system. You're still writing Objective-C-style error checking by hand, missing the compiler's help, and creating a maintenance burden for anyone who has to work with this code going forward.

In some codebases, every single method that could fail still uses the NSError ** pattern, dozens of methods deep. The migration technically succeeded, but the codebase is no more maintainable than it was in Objective-C. You've changed the syntax without improving the design.


When Strings Aren't What They Seem

String handling is one of those areas where Objective-C and Swift look similar enough to be dangerous. Both have string types, both can concatenate and search, and both handle Unicode. But they handle it differently enough that mechanical conversion creates subtle bugs.

The most common issue involves string length calculations. Consider this Objective-C code:

- (NSString *)truncateString:(NSString *)input maxLength:(NSInteger)maxLength {
    if (input.length <= maxLength) {
        return input;
    }
    
    return [input substringToIndex:maxLength];
}

An AI tool converts this to:

func truncateString(_ input: String, maxLength: Int) -> String {
    if input.count <= maxLength {
        return input
    }
    
    let index = input.index(input.startIndex, offsetBy: maxLength)
    
    return String(input[..<index])
}

At first glance, this looks reasonable. The AI even correctly handled Swift's string indexing, which is more complex than Objective-C's. But there's a subtle bug: NSString.length and String.count measure different things.

NSString.length returns the number of UTF-16 code units. String.count returns the number of extended grapheme clusters (roughly, "visible characters"). For ASCII text, they're the same. For emoji, complex Unicode, or characters with combining marks, they differ.
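A few concrete values make the mismatch obvious:

```swift
let thumbsUp = "👍"          // U+1F44D
print(thumbsUp.count)        // 1 grapheme cluster: one visible character
print(thumbsUp.utf16.count)  // 2 UTF-16 code units: a surrogate pair

let flag = "🇺🇸"               // two regional-indicator scalars
print(flag.count)            // 1
print(flag.utf16.count)      // 4

let accented = "cafe\u{301}" // "café" with a combining acute accent
print(accented.count)        // 4
print(accented.utf16.count)  // 5
```

NSString.length would report 2, 4, and 5 for these strings, matching the utf16 view, not the count property.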

If the original code was truncating strings for a database column with a UTF-16 length limit, the converted Swift code will behave differently. A string with emoji might be counted as shorter than it actually is in UTF-16 terms, leading to database errors when you try to save it. Or worse, the truncation happens at the wrong point, splitting a multi-byte character and creating invalid Unicode.

Here's what makes this particularly insidious: it works fine for most strings. Your testing with English text passes. The bugs only appear when users enter emoji, when text includes non-Latin scripts or accented characters, or when your app handles text from external sources you didn't anticipate.

The correct approach depends on what the original code was actually doing:

// If you actually need UTF-16 length (for database constraints, legacy protocols):
func truncateString(_ input: String, maxLength: Int) -> String {
    if input.utf16.count <= maxLength {
        return input
    }
    
    // Note: if maxLength falls in the middle of a surrogate pair, this index
    // won't land on a character boundary; production code should round it
    // down before slicing.
    let endIndex = input.utf16.index(input.utf16.startIndex, offsetBy: maxLength)
    
    return String(input[..<endIndex])
}

// If you need grapheme cluster count (visible characters):
func truncateString(_ input: String, maxLength: Int) -> String {
    if input.count <= maxLength {
        return input
    }
    
    return String(input.prefix(maxLength))
}

AI tools don't make this distinction because they don't understand why the length was being checked. They see a string operation and convert it mechanically. The result compiles, runs, and appears correct until it encounters the data that exposes the semantic mismatch.

The production issues that follow are predictable: user-generated content gets corrupted, search fails to find expected matches, string comparisons fail unexpectedly, all because the migrated code treats string operations as interchangeable when they're not.


Core Data's Type Strictness Surprise

Core Data is one of those frameworks that bridges between Objective-C and Swift in complex ways. In Objective-C, Core Data attributes are loosely typed: an integer attribute is accessed as an NSNumber, regardless of whether it's 16-bit, 32-bit, or 64-bit. Swift is stricter. The generated Core Data properties have specific types, and mismatches cause crashes.

Here's a pattern that appears frequently in legacy code:

@interface User : NSManagedObject
@property (nonatomic, strong) NSNumber *age;
@property (nonatomic, strong) NSNumber *score;
@end

// Usage:
user.age = @(25);
user.score = @(1000);

When you migrate this to Swift, the Core Data model generator might create:

class User: NSManagedObject {
    @NSManaged var age: Int16
    @NSManaged var score: Int32
}

Now look what happens with AI-converted code:

// Original Objective-C: user.age = @([self calculateAge]);
// Converted Swift:
user.age = Int16(calculateAge())

This works if calculateAge() returns something that fits in an Int16. But if that method returns an Int, and the value is larger than 32,767, you get a runtime crash from integer overflow. The Objective-C version would have handled this gracefully (or at least not crashed), because NSNumber is dynamically sized.

The AI tool correctly identified that the Core Data property is Int16, and it correctly added the cast. But it didn't analyze whether the source data could exceed that range. It did a mechanical type conversion without understanding the data flow.
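Swift does offer conversions that fail or clamp instead of trapping. A defensive rewrite of that assignment might look like this, with rawAge standing in for whatever calculateAge() returns:

```swift
let rawAge = 40_000  // out of Int16's range (-32768...32767)

// Int16(rawAge) would trap at runtime with an overflow crash.

// Failable conversion: returns nil instead of crashing,
// so the caller can detect and handle the bad value.
let exact = Int16(exactly: rawAge)     // nil

// Clamping conversion: pins out-of-range values to the nearest bound.
let clamped = Int16(clamping: rawAge)  // 32767
```

Which one is correct depends on the data semantics, which is exactly the judgment call the AI tool skipped.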

What compounds this problem is that Core Data's generated properties depend on your data model file's configuration. If you have an attribute marked as "Integer 16" in your model, Swift generates an Int16 property. If it's "Integer 32", you get Int32. If it's "Integer 64", you get Int64. The AI tool has to guess which one matches your usage, and if the data model and the code assumptions diverge, you get crashes.

Transformable attributes are even worse. Consider:

@property (nonatomic, strong) NSArray *preferences;

In Objective-C, this is stored as a transformable attribute. In Swift, the generated property might be:

@NSManaged var preferences: [Any]

Or it might be:

@NSManaged var preferences: NSArray

Or if someone updated the data model to use a custom transformer:

@NSManaged var preferences: [String]

The AI tool has to guess which one is correct based on usage patterns in the code. If it guesses wrong, you get type mismatches at runtime when Core Data tries to set the value. The code compiles because everything type-checks, but crashes occur when the actual stored data doesn't match the type assumptions.

Migrated apps can crash consistently on first launch because the Core Data types don't match the stored data. The old Objective-C version loaded fine because NSNumber and NSArray are flexible. The new Swift version is strict about types, and when the AI's assumptions are wrong, the app pays at runtime.

The solution requires careful analysis of the Core Data model against actual usage patterns and potentially stored data in production. You need to verify that every attribute type matches both the model definition and the code's assumptions. It's tedious work that can't be automated because it requires understanding data semantics, not just syntax.


What This Means for Your Migration Strategy

If you're reading through these examples and feeling a sense of dread, that's actually a healthy response. These aren't theoretical edge cases. They are systematic issues that appear in almost every AI-assisted migration because they stem from fundamental differences in how Objective-C and Swift handle runtime behavior.

The pattern is consistent: AI tools convert syntax perfectly, and the code compiles successfully. But compilation success is just the starting gate, not the finish line. The real work is ensuring semantic correctness: that the converted code behaves the same way in production, with real users, real data, and real edge cases.

This is why I advocate for the hybrid approach: use AI tools for the mechanical conversion work they excel at, but pair that with expert review focused specifically on these runtime behavior patterns. Someone who's debugged threading issues knows where to look. Someone who's dealt with Unicode edge cases knows which string operations need scrutiny. Someone who's worked with Core Data migrations knows how to verify type correctness.

The result is code that doesn't just compile but actually works correctly in production. That's worth the investment in expert oversight, because fixing these bugs after shipping costs far more than catching them during migration.


Next Steps

If your codebase has been through an AI-assisted migration and you're seeing mysterious production issues, or if you're considering migration and want to avoid these pitfalls, let's talk. I offer a free 30-minute consultation where we can discuss your specific situation and whether a comprehensive migration assessment makes sense.

For projects that move forward, I provide a detailed paid assessment that identifies these exact runtime behavior patterns in your codebase, along with a migration plan that addresses them systematically.

With 15 years of Apple development experience and as the founder of Cocoacasts, I've analyzed dozens of codebases, small and large, and I know exactly what to look for.

Let's talk!


In the next post of this series, we'll explore the technical debt that AI migrations create by preserving old patterns instead of embracing modern Swift idioms. Subscribe to get it in your inbox when it's published.