Would reality really be complete without our beloved four-legged friends? Certainly not. Luckily the latest update to Apple’s ‘Vision’ framework—which gives developers a bunch of useful computer vision tools for iOS and iPad apps—includes the ability to identify and track the skeletal position of dogs and cats.
At its annual WWDC, Apple posted a session introducing developers to the new animal tracking capabilities in the Vision developer tool, explaining that the system works in real time on video as well as on still photos.
The system, which is also capable of tracking the skeletal position of people, gives developers six tracked ‘joint groups’ to work with, which collectively describe the position of the animal’s body.
Tracked joint groups include:
- Head: Ears, Eyes, Nose
- Front Legs: Right leg, Left leg
- Hind Legs: Right rear leg, Left rear leg
- Tail: Tail start, Tail middle, Tail end
- Trunk (neck)
- All (contains all tracked points representing a complete skeletal pose)
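For a sense of how those joint groups surface in code, here's a minimal sketch of querying one group on a still image. It assumes the animal body pose request Apple introduced at WWDC 2023 (`VNDetectAnimalBodyPoseRequest`, iOS 17 and later); the confidence threshold is an arbitrary illustrative value, not an Apple recommendation.

```swift
import Vision

// Sketch: detect an animal's skeletal pose in a still image and read
// out just the tail joint group (.all would return the full skeleton).
func printTailJoints(in image: CGImage) throws {
    let request = VNDetectAnimalBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: image)
    try handler.perform([request])

    for observation in request.results ?? [] {
        // recognizedPoints(_:) takes a joint group name such as
        // .head, .forelegs, .hindlegs, .tail, .trunk, or .all.
        let tailPoints = try observation.recognizedPoints(.tail)
        for (jointName, point) in tailPoints where point.confidence > 0.3 {
            // Locations are normalized image coordinates (0...1).
            print("\(jointName): \(point.location)")
        }
    }
}
```

The same request can be pointed at video by feeding frames to the handler, which is how the real-time tracking Apple demoed would typically be wired up.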
Yes, you read that right, the system has ‘tail tracking’ and ‘ear tracking’ so your dog’s tail wags and floppy ears won’t be missed.
The system supports up to two animals in the scene at one time and, in addition to tracking their position, can also tell a cat from a dog… just in case you have trouble with that.
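The cat-versus-dog distinction maps to a separate, older Vision request. As a hedged sketch, assuming `VNRecognizeAnimalsRequest` (available since iOS 13) is the API in play, species classification looks roughly like this:

```swift
import Vision

// Sketch: identify the animals in an image; the returned label
// identifiers are species strings such as "Cat" or "Dog".
func animalSpecies(in image: CGImage) throws -> [String] {
    let request = VNRecognizeAnimalsRequest()
    let handler = VNImageRequestHandler(cgImage: image)
    try handler.perform([request])

    return (request.results ?? []).compactMap { observation in
        // Each observation carries ranked labels; take the top one.
        observation.labels.first?.identifier
    }
}
```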
Despite the similarity in name to the Vision Pro headset, it isn’t yet clear if Apple will expose the ‘Vision’ computer vision framework to developers of the headset, but it may well be the same foundation that allows the device to identify people in the room around you and fade them into the virtual view so you can talk to them.
That may also have been a reason for building out this animal tracking system in the first place—so you don’t trip over Fido while dancing around the room in your new Vision Pro headset—though we haven’t been able to confirm that the system will work with pets just yet.