Why are emoji characters like πŸ‘©β€πŸ‘©β€πŸ‘§β€πŸ‘¦ treated so strangely in Swift strings?

The character πŸ‘©β€πŸ‘©β€πŸ‘§β€πŸ‘¦ (family with two women, one girl, and one boy) is encoded as follows:

U+1F469 WOMAN,
U+200D ZWJ,
U+1F469 WOMAN,
U+200D ZWJ,
U+1F467 GIRL,
U+200D ZWJ,
U+1F466 BOY
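As a quick sanity check (a minimal sketch using the standard library's unicodeScalars view, not part of the question's test code), you can confirm that sequence directly:

```swift
let family = "πŸ‘©β€πŸ‘©β€πŸ‘§β€πŸ‘¦"

// The unicodeScalars view exposes the underlying code points,
// independent of how Swift groups them into Characters.
let scalars = family.unicodeScalars.map { $0.value }
for value in scalars {
    print("U+\(String(value, radix: 16, uppercase: true))")
}
// Seven scalars: WOMAN, ZWJ, WOMAN, ZWJ, GIRL, ZWJ, BOY
```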

So it has a very interesting encoding; the perfect target for a unit test. However, Swift doesn’t seem to know how to treat it consistently. Here’s what I mean:

"πŸ‘©β€πŸ‘©β€πŸ‘§β€πŸ‘¦".contains("πŸ‘©β€πŸ‘©β€πŸ‘§β€πŸ‘¦") // true
"πŸ‘©β€πŸ‘©β€πŸ‘§β€πŸ‘¦".contains("πŸ‘©") // false
"πŸ‘©β€πŸ‘©β€πŸ‘§β€πŸ‘¦".contains("\u{200D}") // false
"πŸ‘©β€πŸ‘©β€πŸ‘§β€πŸ‘¦".contains("πŸ‘§") // false
"πŸ‘©β€πŸ‘©β€πŸ‘§β€πŸ‘¦".contains("πŸ‘¦") // true

So, Swift says the string contains itself (good) and a boy (good!), but it also says it does not contain a woman, a girl, or a zero-width joiner. What’s happening here? Why does Swift know it contains a boy but not a woman or a girl? I could understand if it treated the emoji as a single character and only recognized the string as containing itself, but the fact that it matches one subcomponent and no others baffles me.

This does not change if I pass a Character instead, such as "πŸ‘©".characters.first!.
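For contrast (a sketch, not part of the test code above): searching at the scalar level does find the subcomponents, which suggests it's the Character-level comparison that is grouping the whole grapheme cluster together:

```swift
let family = "πŸ‘©β€πŸ‘©β€πŸ‘§β€πŸ‘¦"

// Compare at the scalar level instead of the Character level.
let woman: UnicodeScalar = "\u{1F469}"
let zwj: UnicodeScalar = "\u{200D}"

print(family.unicodeScalars.contains(woman)) // true
print(family.unicodeScalars.contains(zwj))   // true
```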


Even more confounding is this:

let manual = "\u{1F469}\u{200D}\u{1F469}\u{200D}\u{1F467}\u{200D}\u{1F466}"
Array(manual.characters) // ["πŸ‘©β€", "πŸ‘©β€", "πŸ‘§β€", "πŸ‘¦"]

Even though I placed the ZWJs in there, they aren’t reflected in the character array. What followed was a little telling:

manual.contains("πŸ‘©") // false
manual.contains("πŸ‘§") // false
manual.contains("πŸ‘¦") // true

So I get the same behavior with the character array… which is supremely annoying, since I know what the array looks like.
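To see where the ZWJs went, here is a diagnostic sketch (written against current Swift, where iterating a String yields Characters directly; on the toolchain that produced the four-element array above, you would iterate manual.characters instead). The ZWJs have not been dropped; they are stored inside the grapheme clusters themselves:

```swift
let manual = "\u{1F469}\u{200D}\u{1F469}\u{200D}\u{1F467}\u{200D}\u{1F466}"

// Print each Character along with the scalars it actually holds.
for ch in manual {
    let codes = ch.unicodeScalars.map { "U+\(String($0.value, radix: 16, uppercase: true))" }
    print("\(ch): \(codes.joined(separator: " "))")
}
```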

