ARKit detecting planes and placing objects

This article covers ARKit plane detection and placing objects on detected planes. It's written in Swift 4 using the Xcode 9 beta.

Here is a screenshot of the output of the project explained in this article.

A complete video demo of the code is uploaded here. All the code in this article can be downloaded / cloned from GitHub.

A Little Introduction:

ARKit is the iOS framework for Augmented Reality. ARKit uses the built-in camera, powerful processors, and motion sensors in iOS devices to track real-world objects and lets virtual objects blend in with the real-world environment. ARKit uses Visual Inertial Odometry (VIO) to accurately track the world around the device. It supports Unity, Unreal, and SceneKit to display AR content.

Before diving into coding, here is a brief description of the objects to know.

  1. ARSession: This class configures and runs the various AR techniques on the device. It reads the world through the camera using motion-sensing techniques and provides the session instance for every AR experience built with ARKit.
  2. Session configuration: An ARSession instance runs a session configuration, which is either ARSessionConfiguration or its subclass ARWorldTrackingSessionConfiguration. This configuration determines how the device's position and motion are tracked in the real world.
    1. ARSessionConfiguration: Provides a basic configuration that detects only the device's orientation.
    2. ARWorldTrackingSessionConfiguration: Provides tracking of real-world surfaces and the device's position while reading the device's motion through the camera. It currently supports only horizontal plane/surface detection.
  3. Views: ARKit provides ARSCNView to display 3D SceneKit content and ARSKView to display 2D SpriteKit content.
  4. ARAnchor: Every node object (either a SceneKit node or a SpriteKit node) is tagged with an ARAnchor object that tracks its real-world position and orientation. ARPlaneAnchor is the subclass of ARAnchor used to track real-world flat surfaces (currently ARKit supports only horizontal surfaces). This object holds the width, length, and center of the plane.
  5. ARSCNViewDelegate: This protocol provides various methods to receive captured images and tracking information. It calls the delegate methods, passing an ARAnchor object, whenever a plane is detected, a frame size is updated, a node is deleted, etc. More details are provided as we go through the code.

Time to code: 

It's certainly easier to understand the objects discussed above when we see the code. Enough of the theory!


Open the project code in Xcode. You need at least Xcode 9 beta (the latest at the time of writing this article) to run the project successfully. Please note that ARKit does not work on iOS simulators; it runs only on iOS devices with an A9 or higher chip.

Set up the ARSCNView just like an SCNView in SceneKit, and attach an SCNScene instance to it.

The ARSCNDebugOptions showFeaturePoints and showWorldOrigin options in the above code display the feature points that ARKit detects on real-world surfaces. The more feature points there are, the easier it is for ARKit to identify horizontal planes.
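A minimal sketch of this setup, assuming the view controller holds an ARSCNView in an outlet named `sceneView` (the outlet name is illustrative, not from the original project):

```swift
import ARKit
import SceneKit
import UIKit

class ViewController: UIViewController {
    // Assumes an ARSCNView wired up in the storyboard; the outlet name is illustrative.
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        // Attach an SCNScene instance, just like with a plain SCNView.
        sceneView.scene = SCNScene()
        // Show detected feature points and the world origin to aid debugging.
        sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints,
                                  ARSCNDebugOptions.showWorldOrigin]
    }
}

extension ViewController: ARSCNViewDelegate {
    // Delegate methods are filled in below.
}
```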


Create an instance of ARWorldTrackingSessionConfiguration and set its planeDetection property to ARWorldTrackingSessionConfiguration.PlaneDetection.horizontal. Then run the scene view's session with this configuration, passing ARSession.RunOptions.resetTracking.
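A sketch of running the session, typically from viewWillAppear. Note that ARWorldTrackingSessionConfiguration was the Xcode 9 beta-era name; later SDKs renamed it ARWorldTrackingConfiguration:

```swift
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Configure world tracking with horizontal plane detection.
    let configuration = ARWorldTrackingSessionConfiguration()
    configuration.planeDetection = .horizontal

    // Run the session, discarding any previous tracking state.
    sceneView.session.run(configuration, options: [.resetTracking])
}
```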

Plane Detection:

As you point the camera at a horizontal surface, it shows the surface feature points while trying to identify the plane. Once it has enough feature points, it recognizes the horizontal surface area and calls the method below.

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode?

The implementation of this method is provided below.

The ARAnchor object holds all the coordinates necessary to create the SCNNode and to tie the real-world anchor coordinates to the virtual object created in SceneKit. First, typecast the ARAnchor to determine whether it is an ARPlaneAnchor. If the cast succeeds, use the resulting planeAnchor object: planeAnchor.extent and planeAnchor.center give the width, length, and center of the plane. These coordinates can be used to create an SCNFloor, an SCNPlane, or any SCNNode that you want to act as a horizontal base. In the code example, we created an SCNBox geometry node. Store this anchor as the current anchor for any future interaction with it.
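A sketch of this delegate implementation, assuming a stored property `currentAnchor` on the view controller (the property name and box dimensions are illustrative):

```swift
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // Only plane anchors carry surface extents; ignore other anchor types.
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return nil }

    // Build a thin box matching the detected plane's extent.
    let box = SCNBox(width: CGFloat(planeAnchor.extent.x),
                     height: 0.01,
                     length: CGFloat(planeAnchor.extent.z),
                     chamferRadius: 0)
    let planeNode = SCNNode(geometry: box)
    planeNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)

    // Store the anchor for any future interaction (assumed stored property).
    currentAnchor = planeAnchor
    return planeNode
}
```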

Plane Update:

As more features are identified for the ARPlaneAnchor, ARKit sends new anchor coordinates to the method below. It's up to us whether we are interested in the updated extent of the horizontal surface or not.

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor)

The implementation of this update method is provided below.
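A sketch of handling the update, assuming the node's geometry is the SCNBox created earlier:

```swift
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let box = node.geometry as? SCNBox else { return }

    // Resize the box as ARKit refines its estimate of the surface.
    box.width = CGFloat(planeAnchor.extent.x)
    box.length = CGFloat(planeAnchor.extent.z)
    node.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
}
```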

At any moment, we can get the SCNNode tied to a specific ARAnchor, and vice versa, using the method calls below.
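ARSCNView provides lookups in both directions (`someAnchor` and `someNode` are illustrative placeholders):

```swift
// SCNNode for a given ARAnchor (returns nil if no node is attached).
let node = sceneView.node(for: someAnchor)

// ARAnchor for a given SCNNode (returns nil if the node has no anchor).
let anchor = sceneView.anchor(for: someNode)
```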


Any time we want to get the ARAnchor that the user touches, we can simply do a hitTest on the scene view at the touched location. This returns an array of ARHitTestResult objects. Get the first result from the array and read the "identifier" property of its ARPlaneAnchor; this property uniquely identifies every anchor.
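A sketch of this lookup, wired into touchesBegan (the hit-test type is one reasonable choice; the original code may use a different one):

```swift
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let location = touches.first?.location(in: sceneView) else { return }

    // Hit-test against detected planes at the touch location.
    let results = sceneView.hitTest(location, types: .existingPlaneUsingExtent)
    if let planeAnchor = results.first?.anchor as? ARPlaneAnchor {
        // Each anchor carries a unique identifier.
        print("Touched plane with identifier: \(planeAnchor.identifier)")
    }
}
```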

Place the object:

After you get the first object in the ARHitTestResult array, the columns of its worldTransform provide the real-world coordinates of the touch location. Just create an SCNNode with your geometry, set its position from those column values, and add the node to the scene view. You are done!
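A sketch of placing an object at the touched location (the sphere geometry is illustrative; `location` is the touch point from the previous step):

```swift
if let result = sceneView.hitTest(location, types: .existingPlaneUsingExtent).first {
    let sphere = SCNNode(geometry: SCNSphere(radius: 0.05))

    // Column 3 of the world transform holds the translation (x, y, z).
    let translation = result.worldTransform.columns.3
    sphere.position = SCNVector3(translation.x, translation.y, translation.z)

    sceneView.scene.rootNode.addChildNode(sphere)
}
```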

How exciting ARKit is! For any questions or comments, let me know. Happy coding!


Passionate about learning new things. Loves coding and problem solving. Built apps from scratch on the iOS platform with Swift, Objective-C, HTML5/JavaScript, Cordova, and Xamarin, using both MVC and MVVM. Coded extensively in .NET and database technologies before moving to mobile development.

Spends free time playing with my kid and watching TV.