Hello, I am building an experience in which two physical sets will be constructed and then overlaid with digital content using ARKit. I have been looking at your platform as a potential system for orienting users; the key requirement is that the digital content stays aligned with the physical sets as closely as possible.

I would ideally like to avoid marker tags and instead use vision processing such as Core ML to place content. Unfortunately, the sets will be built off-site and I will not have access to the complete designs until very close to launch, so training anchor points and object recognition will be tricky and could be sensitive to changes in lighting. I am therefore hoping to have a system in place that guarantees users are always mapped into the space correctly.

If I were to use the new beacon-based indoor tracking SDK to triangulate position, would I be able to locate multiple iOS devices inside a 20'×20' space accurately enough to overlay content that needs to be positionally relevant? This includes determining the orientation of the scene well enough to overlay schematics on top of the actual structures. I will have accurate renderings of the space ahead of time to develop against.
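To give a sense of the kind of positioning math I have in mind, here is a rough 2D trilateration sketch in plain NumPy (this is not the beacon SDK's API — the beacon coordinates, distances, and room size are hypothetical, just illustrating a least-squares position fix from three range estimates):

```python
import numpy as np

def trilaterate(beacons, distances):
    """Least-squares 2D position from >= 3 beacons.

    Linearizes the circle equations (x-xi)^2 + (y-yi)^2 = di^2
    by subtracting the first beacon's equation from the others,
    leaving a linear system A @ [x, y] = b.
    """
    beacons = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = beacons[0]
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1)
         - (x0 ** 2 + y0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical 20'x20' (~6.1 m) room with beacons in three corners.
beacons = [(0.0, 0.0), (6.1, 0.0), (0.0, 6.1)]
true_pos = np.array([2.0, 3.0])
dists = [np.linalg.norm(true_pos - np.array(b)) for b in beacons]
print(trilaterate(beacons, dists))  # close to [2.0, 3.0]
```

With ideal distances this recovers the position exactly; my concern is how much real BLE ranging noise degrades it, and whether it stays within the ~30 cm I would need for schematic overlay.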
I would love any advice or insights on this, and if there are any existing demos of ARKit and Estimote beacons working in unison for spatial awareness, I would very much like to see them. So far I have only found general marketing pieces, and no actual hardware demos proving accuracy to within 30 cm.
I appreciate any help you can offer.