I have come across an interesting example of an AR developer's experience in the context of edge computing. Please see the following post by Seong-Jik Kim from KBS: https://www.linkedin.com/feed/update/urn:li:activity:6555928782660825088/
On the one hand, Kim shows how capable new phones can be; on the other hand, his experiment clearly shows how edge computing could improve the user experience. Adding features like object shadows or understanding the position and nature of a movement (like the jump he presents) requires a lot of context awareness, e.g. knowing the position of light sources and the physical space (the box or the steps to move up and down). You need to process a very complex digital twin of the surroundings in real time. This can hardly be done on the phone, so moving it to the edge is the only logical step.
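To make the offload idea concrete, here is a minimal sketch of my own (not Kim's code): the phone sends each camera frame plus its pose to an edge service and gets back the context mentioned above, i.e. estimated light sources and nearby surface geometry, which the app then uses locally for shadows and movement. The endpoint name, response shape, and plain-HTTP transport are all assumptions for illustration; a real deployment would likely use a lower-latency protocol.

```python
# Sketch: offloading scene-context extraction to an edge service.
# Hypothetical endpoint and response format; not a real API.
import requests

EDGE_URL = "http://edge.local:8080/scene-context"  # assumed edge endpoint

def query_edge(frame_jpeg: bytes, pose: list[float]) -> dict:
    """Send one camera frame and the device pose, receive scene context."""
    resp = requests.post(
        EDGE_URL,
        files={"frame": ("frame.jpg", frame_jpeg, "image/jpeg")},
        data={"pose": ",".join(str(v) for v in pose)},
        timeout=0.1,  # AR needs an answer within a frame budget, else fall back to on-device estimates
    )
    resp.raise_for_status()
    # Assumed response shape:
    # {"lights": [{"direction": [x, y, z], "intensity": 0.8}],
    #  "planes": [{"center": [...], "normal": [...], "extent": [...]}]}
    return resp.json()
```

The point of the sketch is the split: the heavy, context-aware part (lighting estimation, reconstructing the box and steps) runs on the edge, while the phone only renders against the returned lights and planes.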