SiriKit For Payments – Access Account Balance

This article demonstrates how to connect your iOS financial app to Siri so that users can ask for their account balance information with their voice.

Overview:

SiriKit supports user interaction with apps in 11 domains (as of this writing), such as Messaging, Workouts, Payments, CarPlay, Photos, and VoIP Calling.

The types of requests a user can make are categorized into intents, and related intents are grouped into domains.

All user requests made through Siri are communicated to the Intents app extension. The extension makes API calls on a service framework and responds back to the user. That is the flow we are going to build in this article.

Since Siri communication is interactive, the intent may not get all the necessary details of the request on the first pass. For example, if you ask Siri “What is my account balance?”, we still need to know which account the user means, so Siri should be able to ask back for the type of account. When the user responds with Checking or Savings, the intent should be ready to handle the request or ask Siri for further clarification or confirmation.

To search for a balance, SiriKit provides the INSearchForAccountsIntent class, and we need to adopt the INSearchForAccountsIntentHandling protocol.

Adopting INSearchForAccountsIntentHandling requires implementing methods that cover the three areas below.

  • Handling the Intent – Called when it is time to search for the account information.
  • Confirming the Response – Called when it is time for you to confirm whether you can perform the search.
  • Resolving Details of the Intent – Called when you need clarification from the user to complete the request.
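As a rough shape of what we will build, the handler class ends up looking something like this (the full implementations come later in the article):

import Intents

class SearchBalanceIntentHandler: NSObject, INSearchForAccountsIntentHandling {

    // Handling the intent: perform the search and return the result
    func handle(intent: INSearchForAccountsIntent,
                completion: @escaping (INSearchForAccountsIntentResponse) -> Void) { /* ... */ }

    // Confirming the response: verify that the search can be performed
    func confirm(intent: INSearchForAccountsIntent,
                 completion: @escaping (INSearchForAccountsIntentResponse) -> Void) { /* ... */ }

    // Resolving details: ask Siri to prompt the user for missing information
    func resolveAccountType(for intent: INSearchForAccountsIntent,
                            with completion: @escaping (INAccountTypeResolutionResult) -> Void) { /* ... */ }
}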

Getting Started:

Create SiriKitDemo App –

Let’s create a simple iOS Single View App and name it SiriKitDemo. Create UILabels to show the balance amounts for both the Checking and Savings accounts, and add IBOutlets for them.

In Info.plist, add the key “Privacy – Siri Usage Description” with a description that is shown when asking the user for permission to use Siri.

Working with SiriKit requires the Siri capability to be enabled. First, create an App ID on the Apple Developer site using the same bundle ID you see in Xcode, and ensure that the “SiriKit” service is selected.

Now in Xcode, go to the project target (SiriKitDemo), open the Capabilities tab, and enable the Siri option. If you don’t see the Siri option, close and reopen Xcode and it should appear.

Create App service Framework –

Now, let’s create a Payments service framework that can make service API calls to fetch account balance information. Create a new target, select the Cocoa Touch Framework template, and name it PaymentsFramework.

Now add a new Swift file, “AccountBalance.swift”, and create a singleton class AccountsBalance. This class is responsible for making the service calls that fetch the balance amount.

Here, I created an enum AccountType and a service method getAccountBalance that takes the account type as a parameter and returns the amount. I am just returning hardcoded amounts, but in a real project this is where the service call is made and the amount is returned in the completion handler.
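A minimal sketch of what that framework class can look like — the hardcoded amounts, the NSDecimalNumber return type, and the exact case names are placeholders rather than the article’s exact code:

import Foundation

public enum AccountType {
    case checking
    case saving
}

public final class AccountsBalance {

    public static let shared = AccountsBalance()
    private init() {}

    /// In a real project this would hit a backend service;
    /// here the amounts are hardcoded and returned via the completion handler.
    public func getAccountBalance(type: AccountType, completion: @escaping (NSDecimalNumber) -> Void) {
        switch type {
        case .checking:
            completion(NSDecimalNumber(value: 1250.75))
        case .saving:
            completion(NSDecimalNumber(value: 5300.00))
        }
    }
}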

Now, let’s use the getAccountBalance API in the app. Go to ViewController.swift in the SiriKitDemo project and import the newly created framework with “import PaymentsFramework”. Create a loadBalance() function and call it in viewDidLoad(). Here, we call the API from the framework and populate the UILabels for Checking and Savings. Run the app to make sure you see the fetched amounts.
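Something along these lines, assuming outlet names checkingBalanceLabel and savingBalanceLabel (the label names are my own):

import UIKit
import PaymentsFramework

class ViewController: UIViewController {

    @IBOutlet weak var checkingBalanceLabel: UILabel!
    @IBOutlet weak var savingBalanceLabel: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
        loadBalance()
    }

    func loadBalance() {
        // Fetch each balance from the framework and populate the labels on the main queue
        AccountsBalance.shared.getAccountBalance(type: .checking) { [weak self] amount in
            DispatchQueue.main.async { self?.checkingBalanceLabel.text = "$\(amount)" }
        }
        AccountsBalance.shared.getAccountBalance(type: .saving) { [weak self] amount in
            DispatchQueue.main.async { self?.savingBalanceLabel.text = "$\(amount)" }
        }
    }
}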

Create Intent Extension:

Add a new target, select the Intents Extension template, and name it PaymentsIntents. Open IntentHandler.swift and you will see the sample code provided for the Messaging domain. We don’t need any of that code.

In the PaymentsIntents Info.plist, add an INSearchForAccountsIntent entry under IntentsSupported to make Siri aware of the intents the app handles.

Handle Siri Requests:

Create a new Swift file in the Intents extension and name it SearchBalanceIntentHandler.swift. Create a class SearchBalanceIntentHandler that subclasses NSObject and adopts the INSearchForAccountsIntentHandling protocol. This class is responsible for handling and resolving the voice requests. Also import PaymentsFramework.

Let’s add code to complete handle(intent:completion:) and resolveAccountType(for:with:) methods.

resolveAccountType(for:with:) – When the user makes a request for an account balance, SiriKit calls the resolveAccountType(for:with:) method, passing the request information in the intent parameter. It is up to us to check the intent’s account type and call the completion handler.

In the resolveAccountType method, the intent object has an accountType property of the INAccountType enum type. It can be checking, credit, debit, investment, mortgage, prepaid, saving, or unknown. If the user says something like “show my checking balance” or “show my saving balance”, we have all the necessary information and call the completion handler with a success resolution result containing the account type. Otherwise, we pass a needsValue resolution result so that Siri asks the user which type of account they want the balance for.
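A sketch of that resolution logic might look like this:

func resolveAccountType(for intent: INSearchForAccountsIntent,
                        with completion: @escaping (INAccountTypeResolutionResult) -> Void) {
    switch intent.accountType {
    case .checking, .saving:
        // We have everything we need; Siri will move on to handle(intent:completion:)
        completion(INAccountTypeResolutionResult.success(with: intent.accountType))
    default:
        // Ask Siri to prompt the user for the account type
        completion(INAccountTypeResolutionResult.needsValue())
    }
}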

When we call the resolveAccountType completion handler with INAccountTypeResolutionResult.success, control moves on to handle(intent:completion:) to process the request and send the result back to the user.

handle(intent:completion:) – In this method, we make the service API call to fetch the account balance and call the completion handler with an INSearchForAccountsIntentResponse carrying the service status. If successful, we send the balance amount in an INBalanceAmount object.

Below is a summary of the functionality in the handle(intent:completion:) method; a sketch follows the list.

  1. Check intent.accountType to see whether it is either checking or saving.
  2. Make the service call to fetch the amount, passing the account type as a parameter: AccountsBalance.shared.getAccountBalance(type: type) { (amount) in … }
  3. If the service call fails, send a failure response through the completion handler:
        completion(INSearchForAccountsIntentResponse(code: INSearchForAccountsIntentResponseCode.failure, userActivity: nil))
  4. If the service call succeeds and the amount is received:
    1. Create an INBalanceAmount object with the balance amount and its currency code.
    2. Create an INPaymentAccount object, passing the INBalanceAmount along with an INSpeakableString that is displayed to the user in the Account column.
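Here is a sketch of that flow; the currency code, organization name, and spoken phrase are placeholder values:

func handle(intent: INSearchForAccountsIntent,
            completion: @escaping (INSearchForAccountsIntentResponse) -> Void) {
    // 1. Map the intent's account type to the framework's type
    let type: AccountType
    switch intent.accountType {
    case .checking: type = .checking
    case .saving:   type = .saving
    default:
        // 3. Anything we cannot handle is reported as a failure
        completion(INSearchForAccountsIntentResponse(code: .failure, userActivity: nil))
        return
    }

    // 2. Fetch the amount from the service framework
    AccountsBalance.shared.getAccountBalance(type: type) { amount in
        // 4.1 Wrap the amount and its currency code
        let balance = INBalanceAmount(amount: amount, currencyCode: "USD")
        // 4.2 Build the payment account shown to the user
        let account = INPaymentAccount(nickname: INSpeakableString(spokenPhrase: "\(type) account"),
                                       number: nil,
                                       accountType: intent.accountType,
                                       organizationName: INSpeakableString(spokenPhrase: "SiriKitDemo Bank"),
                                       balance: balance,
                                       secondaryBalance: nil)
        let response = INSearchForAccountsIntentResponse(code: .success, userActivity: nil)
        response.accounts = [account]
        completion(response)
    }
}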

Now go to the PaymentsIntents target, open Build Phases, and make sure SearchBalanceIntentHandler.swift appears under Compile Sources. If it is not there, add it by clicking the plus button and selecting the file.

The last thing to do is to go to the IntentHandler class and return a SearchBalanceIntentHandler instance if the intent is an INSearchForAccountsIntent.
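A minimal version of that routing looks like this:

import Intents

class IntentHandler: INExtension {

    override func handler(for intent: INIntent) -> Any? {
        // Route account-search intents to our custom handler
        if intent is INSearchForAccountsIntent {
            return SearchBalanceIntentHandler()
        }
        return nil
    }
}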

Now, we are done with the coding. Below is how to test it.

  1. On Device – Install the app on a device, invoke Siri, and ask a question like “What is my checking account balance?”.
  2. On Simulator – Edit the PaymentsIntents scheme, enter the query “What is my checking account balance?” in the Siri Intent Query field, and select “Ask on Launch” for the Executable.

Here is the working SiriKitDemo output.

For any questions, please feel free to reach out. I will be glad to answer.

UIAccessibility Tutorial – Part 1

UIAccessibility is a set of methods that provide assistive information to users with visual disabilities. It lets users access information about the views and controls in the UI, take actions such as tapping a UIButton, and navigate through the app.

Accessibility is one of the most overlooked parts of app development, but remember that any app we create is meant for all users, and code needs to be written so that visually impaired users can experience your app. I believe every responsible company should make its apps accessible to users with special needs.

One example of assistive functionality is VoiceOver, which we will cover in detail in this article. VoiceOver can be enabled in Settings -> General -> Accessibility -> VoiceOver. VoiceOver (VO) can also be toggled via a shortcut by triple-clicking the Home button on non-iPhone X devices or triple-clicking the side button on iPhone X.

Accessibility can be set either in Interface Builder or in code. The first part of this article covers the basics of UIAccessibility and how to add accessibility to simple views. The second part covers accessibility for complex views.

Basics:

accessibilityLabel,  accessibilityHint, accessibilityValue, accessibilityTraits:

These are the four basic properties that define the accessibility of a view. For demonstration purposes, I created a simple view that contains First Name and Last Name UILabels, first name and last name UITextFields, and a Submit UIButton.

Below is the function that I call in viewDidLoad() to configure accessibility.
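A sketch of such a function, using the dot-style trait names and assuming outlets named firstNameLabel, lastNameLabel, firstNameTextField, lastNameTextField, and submitButton (the labels and hints are my own wording):

func configureAccessibility() {
    // UILabels read their text by default; the static text trait is also the default
    firstNameLabel.accessibilityTraits = .staticText
    lastNameLabel.accessibilityTraits = .staticText

    firstNameTextField.accessibilityLabel = "First Name"
    firstNameTextField.accessibilityHint = "Enter your first name"

    lastNameTextField.accessibilityLabel = "Last Name"
    lastNameTextField.accessibilityHint = "Enter your last name"

    submitButton.accessibilityLabel = "Submit"
    submitButton.accessibilityTraits = .button
    submitButton.accessibilityHint = "Submits the entered first and last name"
}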

Run the app in the simulator and open the “Accessibility Inspector” from Xcode -> Open Developer Tool. This tool lets you see the accessibility properties of each element and take actions as needed. Select the inspection pointer button in the inspector and then select a UI element in the simulator to see its properties.

When the app runs on an iPhone with VO activated, VO highlights the frame of each element, and you can navigate the app from top left to bottom right (by swiping right or left) in the sequence the elements are laid out in the UI. The sequence can be altered, which will be discussed later.

VO reads out the following for each element, in this order:

  1.  accessibilityLabel
  2. accessibilityValue
  3. accessibilityTraits 
  4. accessibilityHint
  5. the default action of the UIControl, or the custom action defined programmatically.
  • The first property that VO reads out is accessibilityLabel. In this example, since “First Name” is a UILabel, it reads “First Name” by default. If you want it to read a custom value, just change the value of accessibilityLabel.
  • accessibilityValue is the value of the UI element. In this example, the text field’s default accessibilityValue is the text that is entered. If you need it to read a value different from what was entered, assign a custom value to accessibilityValue. firstNameTextField.accessibilityValue = "Fixed Value" reads “Fixed Value” as the accessibilityValue no matter what text the user entered.
  • accessibilityTraits describes the kind of element and the data it holds. UIAccessibilityTraits holds the possible values, such as static text, button, image, and so on. Since UILabels hold static data, their accessibilityTraits property defaults to .staticText, while the “Submit” button gets the .button value. There are often scenarios where we need a plain UIView that is tappable (with a UITapGestureRecognizer added) and behaves like a button. For accessibility, just assign the button trait to the view’s accessibilityTraits; VO then reads the view out to the user as a button, suggesting the tap action (see the sketch after this list).
  • accessibilityHint, as the name suggests, is a hint we can provide the user about the UI element. In our case, when VO highlights a text field, we can explicitly tell the user the context of the field and what it is intended for.
  • The last accessibility item that VO reads is the action that can be taken on the UI element, which is inferred from accessibilityTraits. For the button trait, the default action is a double tap to execute the tap. For custom views, we can define our own custom actions such as swipe up and swipe down; the details will be covered in part two of this article.
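For example, a plain tappable UIView can be exposed to VoiceOver as a button like this (the view name, label, and hint are hypothetical):

func makeCardViewAccessible(_ cardView: UIView) {
    // Expose a tappable container view to VoiceOver as a single button
    cardView.isAccessibilityElement = true
    cardView.accessibilityLabel = "Profile card"
    cardView.accessibilityTraits = .button
    cardView.accessibilityHint = "Double tap to open the profile"
}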

Feel free to let me know in the comments if you need any help implementing accessibility in your project. I will be happy to help!

Face ID Authentication in iOS 11, Swift 4

The iPhone X was released on 11/03/2017, and its most intriguing feature is Face ID authentication. How about adding Face ID authentication to the login of our own app? This article covers the code implementation for it. Let’s dive in!

A complete video demo of the code is uploaded here, and all the code in this article can be downloaded or cloned from GitHub.
Here is a screenshot of the output of the project explained in this article.

Overview

Apple has provided the LocalAuthentication framework since iOS 8 for Touch ID authentication, and Face ID uses the same framework.

The LocalAuthentication framework provides facilities for requesting authentication from users with specified security policies. Through LAContext, it provides the interface for evaluating authentication policies and access controls, managing credentials, and invalidating authentication contexts.

Evaluating Authentication Policies:

The LAContext object provides the methods below to check the device’s capability for biometric authentication and to perform the actual authentication.

    1. canEvaluatePolicy: This method returns true if the device is ready for biometric authentication. Otherwise, it returns false and populates an NSError with the error code.
    2. evaluatePolicy: If canEvaluatePolicy returns true, we can call evaluatePolicy, which shows the interface for biometric authentication along with prompts for the retry option. The reply completion handler returns (success, error) arguments, which can be used to determine success or the failure error code.
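Putting the two calls together, a minimal sketch of the check-and-authenticate flow looks like this (the reason string is a placeholder):

import LocalAuthentication

func authenticateWithBiometrics() {
    let context = LAContext()
    var error: NSError?

    // 1. Check whether the device can use Face ID / Touch ID
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        print("Biometrics unavailable: \(error?.localizedDescription ?? "unknown error")")
        return
    }

    // 2. Perform the actual authentication
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Log in to your account") { success, evaluateError in
        DispatchQueue.main.async {
            if success {
                print("Authenticated successfully")
            } else {
                print("Authentication failed: \(evaluateError?.localizedDescription ?? "unknown error")")
            }
        }
    }
}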

User Authentication Prompts:

LAContext provides the three string properties below to prompt the user:

var localizedReason: String -  The localized explanation for authentication shown in the dialog presented to the user.

var localizedFallbackTitle: String? -  The localized title for the fallback button in the dialog presented to the user during authentication.

var localizedCancelTitle: String? – The localized title for the cancel button in the dialog presented to the user during authentication.

Here is what we have so far.

Authentication Failure Reasons

When authentication fails at either canEvaluatePolicy or evaluatePolicy, the error code that is returned can be used to determine the failure reason.

Below is a function that maps each error code to its failure reason.
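A sketch of such a mapping, using the LAError codes available in the iOS 11 SDK (the message strings are my own wording, not the project’s exact text):

import LocalAuthentication

func failureReason(for errorCode: Int) -> String {
    guard let code = LAError.Code(rawValue: errorCode) else {
        return "Unknown error code \(errorCode)"
    }
    switch code {
    case .authenticationFailed: return "Authentication failed: invalid credentials."
    case .userCancel:           return "Authentication was canceled by the user."
    case .userFallback:         return "The user tapped the fallback button."
    case .systemCancel:         return "Authentication was canceled by the system."
    case .passcodeNotSet:       return "A passcode is not set on the device."
    case .biometryNotAvailable: return "Face ID / Touch ID is not available on this device."
    case .biometryNotEnrolled:  return "Face ID / Touch ID is not enrolled."
    case .biometryLockout:      return "Biometry is locked out after too many failed attempts."
    default:                    return "Unhandled error code \(errorCode)"
    }
}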

The GitHub link for this project has an MTBiometricAuthentication class that you can add to your project. Just create an instance of it and add a notification observer, as shown below.

Now call the authenticationWithBiometricID() function, and the app will be notified of success or of the failure reason via a posted notification. After evaluating authentication, the app handles the result in the notification observer, and you are all set with biometric authentication.
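A rough sketch of that usage — the notification name here is an assumption; check the MTBiometricAuthentication class in the repo for the actual constant and userInfo keys:

let biometricAuth = MTBiometricAuthentication()

// Assumed notification name; the real one is defined in MTBiometricAuthentication
NotificationCenter.default.addObserver(forName: Notification.Name("BiometricAuthenticationResult"),
                                       object: nil,
                                       queue: .main) { notification in
    // Inspect userInfo for the success flag or the failure reason
    print(notification.userInfo ?? [:])
}

biometricAuth.authenticationWithBiometricID()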

For any questions  on this topic, drop a comment and I will be glad to respond. Happy coding!

ARKit detecting planes and placing objects

This article covers ARKit plane detection and placing objects on the detected plane. It’s written in Swift 4 with the Xcode 9 beta.

Here is a screenshot of the output of the project explained in this article.

A complete video demo of the code is uploaded here, and all the code in this article can be downloaded or cloned from GitHub.

A Little Introduction:

ARKit is the iOS framework for augmented reality. ARKit uses the built-in camera, powerful processors, and motion sensors of iOS devices to track real-world objects and let virtual objects blend in with the real-world environment. ARKit uses Visual Inertial Odometry (VIO) to accurately track the world around it, and it supports Unity, Unreal, and SceneKit for displaying AR content.

Before diving into the code, here is a brief description of the objects to know.

  1. ARSession: This class configures and runs the various AR techniques on the device. It reads the scene through the camera and motion sensors, and every AR experience built with ARKit has a session instance.
  2. Session configuration: The ARSession instance runs a session configuration, which is either ARSessionConfiguration or its subclass ARWorldTrackingSessionConfiguration. The configuration determines how the device’s position and motion are tracked in the real world.
    1. ARSessionConfiguration: Provides the basic configuration that tracks only the device’s orientation.
    2. ARWorldTrackingSessionConfiguration: Tracks the device’s position and orientation relative to real-world surfaces by combining camera images with device motion. It currently provides only horizontal plane/surface detection.
  3. Views: ARKit provides ARSCNView to display 3D SceneKit content and ARSKView to display 2D SpriteKit content.
  4. ARAnchor: Every node (a SceneKit node or a SpriteKit node) is tagged with an ARAnchor object that tracks its real-world position and orientation. ARPlaneAnchor is the subclass of ARAnchor used to track real-world flat surfaces (currently ARKit supports only horizontal surfaces); it holds the width, length, and center of the plane.
  5. ARSCNViewDelegate: This protocol provides various methods for receiving captured images and tracking information. The delegate methods are called with an ARAnchor object whenever a plane is detected, its extent is updated, a node is removed, and so on. More details are provided as we go through the code.

Time to code: 

The objects discussed above are much easier to understand once we see the code. Enough of the theory!

Setup:

Open the project code in Xcode. You need at least the Xcode 9 beta (the latest at the time of writing) to run the project successfully. Please note that ARKit does not work on the iOS Simulator; it runs only on iOS devices with an A9 or newer chip.

Set up the ARSCNView just like an SCNView in SceneKit, and attach an SCNScene instance to it.
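A minimal setup sketch, assuming an ARSCNView outlet named sceneView:

import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.scene = SCNScene()
        // Show the detected feature points and the world-origin axes while debugging
        sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints,
                                  ARSCNDebugOptions.showWorldOrigin]
    }
}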

The ARSCNDebugOptions values showFeaturePoints and showWorldOrigin in the code above display the feature points ARKit detects on real-world surfaces and the world-origin axes. The more feature points there are, the easier it is for ARKit to identify horizontal planes.

Configuration:

Create an instance of ARWorldTrackingSessionConfiguration and set its planeDetection property to .horizontal. Then run the sceneView session with this configuration, passing the ARSession.RunOptions.resetTracking option.
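A sketch of that setup; note that the class was named ARWorldTrackingSessionConfiguration in the early Xcode 9 betas and became ARWorldTrackingConfiguration in the released SDK, which is the name used here:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // ARWorldTrackingConfiguration in the released SDK
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}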

Plane Detection:

As you point the camera at a horizontal surface, ARKit shows the surface feature points while trying to identify the plane. Once it has enough feature points, it recognizes the horizontal surface area and calls the method below.


- (nullable SCNNode *)renderer:(id<SCNSceneRenderer>)renderer nodeForAnchor:(ARAnchor *)anchor;

The implementation of this method is provided below.

The ARAnchor object holds all the coordinates necessary to create the SCNNode and tie the real-world anchor coordinates to the virtual object created in SceneKit. First, cast the ARAnchor to determine whether the object is an ARPlaneAnchor. If the cast succeeds, we get a planeAnchor object; planeAnchor.extent and planeAnchor.center give the width, length, and center of the plane. These coordinates can be used to create an SCNFloor, SCNPlane, or any SCNNode that you want to act as a horizontal base. In this example, we create an SCNBox geometry node. Store this anchor as the current anchor for any future interaction with it.
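A sketch of that implementation in Swift (currentAnchor is an assumed stored property, and the box color and thickness are arbitrary):

var currentAnchor: ARPlaneAnchor?

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return nil }

    // ARKit keeps this node's transform in sync with the anchor
    let node = SCNNode()

    // A thin box matching the detected plane's extent
    let box = SCNBox(width: CGFloat(planeAnchor.extent.x),
                     height: 0.005,
                     length: CGFloat(planeAnchor.extent.z),
                     chamferRadius: 0)
    box.firstMaterial?.diffuse.contents = UIColor.blue.withAlphaComponent(0.4)

    let boxNode = SCNNode(geometry: box)
    // Offset the visualization to the plane's center, relative to the anchor node
    boxNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
    node.addChildNode(boxNode)

    currentAnchor = planeAnchor
    return node
}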

Plane Update:

As more feature points are identified for the ARPlaneAnchor, ARKit sends updated anchor coordinates to the method below. It’s up to us whether we are interested in the updated extent of the horizontal surface or not.


- (void)renderer:(id<SCNSceneRenderer>)renderer didUpdateNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor;

The implementation of the update method is provided below.

At any moment, we can get the SCNNode tied to a specific ARAnchor, and vice versa, using the method calls below.
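A sketch of the update method, followed by the ARSCNView lookups that map between nodes and anchors (assuming the child-box setup from the previous sketch):

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let boxNode = node.childNodes.first,
          let box = boxNode.geometry as? SCNBox else { return }

    // Resize and recenter the visualization as ARKit refines the plane estimate
    box.width = CGFloat(planeAnchor.extent.x)
    box.length = CGFloat(planeAnchor.extent.z)
    boxNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
}

// Mapping between the two at any time:
// let node = sceneView.node(for: anchor)
// let anchor = sceneView.anchor(for: node)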

ARHitTestResult:

Any time we want to get the ARAnchor the user touches, we can do a hit test on the scene view at the touched location. This gives an array of ARHitTestResult objects. Get the first result from the array and read the “identifier” property of its ARPlaneAnchor; this property uniquely identifies every anchor.

Place the object:

After you get the first object in the ARHitTestResult array, its worldTransform columns provide the real-world coordinates of the touch location. Just create an SCNNode with your geometry, set its position to those column values, and add the node to the scene. You are done!
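A combined sketch of the hit test and object placement on touch (the sphere geometry is a placeholder for whatever you want to place):

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let location = touch.location(in: sceneView)

    // Hit-test only against planes ARKit has already detected
    guard let result = sceneView.hitTest(location, types: .existingPlaneUsingExtent).first else { return }

    // Place a small sphere at the touched point on the plane
    let sphere = SCNSphere(radius: 0.02)
    sphere.firstMaterial?.diffuse.contents = UIColor.red
    let sphereNode = SCNNode(geometry: sphere)

    let transform = result.worldTransform
    sphereNode.position = SCNVector3(transform.columns.3.x,
                                     transform.columns.3.y,
                                     transform.columns.3.z)
    sceneView.scene.rootNode.addChildNode(sphereNode)
}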

How exciting ARKit is! For any questions or comments, let me know. Happy coding!

Pan drag UIView on UIBezierPath curve


This article covers one solution for panning a UIView along a UIBezierPath curve with a UIPanGestureRecognizer. The code is written in Swift 3 with Xcode 8.3.2.

Let’s look at the output before going into details.

Output: https://machinethinks.com/wp-content/uploads/2017/07/output.gif

Requirement:

There are several examples on Stack Overflow showing how to animate a UIView along a UIBezierPath, but there are not many that show how to move the object along the path with user touch and pan interactions. That is what we cover here.

Limitation:

This code assumes the Bézier path is monotonic, i.e. there is a continuous increase or decrease along either the X or Y axis while drawing the curve. The output.gif shows a sample curve with a continuous increase on the Y axis from start to finish.

How to code:

Let’s draw the UIBezierPath with a quad curve, which takes a start point, an end point, and a control point.

1) Create the quad curve and store the start point p0, end point p2, and control point p1. We call drawBezierPath() in viewDidLoad():

bezierPath.move(to: p0)

bezierPath.addQuadCurve(to: p2, controlPoint: p1)

2) emojiView is the view that we are going to drag along the UIBezierPath.

3) In viewDidAppear, we set the emojiView center to the starting point of the curve and also store the starting emojiView position.

4) Add the UIPanGestureRecognizer to the view.

5) Now comes the point calculation. If it takes 1 unit of time to draw the Bézier path, the method below gets the (x, y) point at a specific time t between 0 and 1. Big thanks to Erica Sadun for the idea at http://ericasadun.com.

This method needs to be called twice: once for the x value and again for the y value.
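A sketch of that helper, evaluating the quadratic Bézier formula B(t) = (1 − t)²·p0 + 2(1 − t)t·p1 + t²·p2 for one axis at a time (the parameter names are my own):

// Returns the quadratic Bézier value for one axis at parameter t in 0...1
func getPointAtPercent(t: CGFloat, start: CGFloat, control: CGFloat, end: CGFloat) -> CGFloat {
    let oneMinusT = 1 - t
    return oneMinusT * oneMinusT * start
         + 2 * oneMinusT * t * control
         + t * t * end
}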

6) Here is the final part: how to drag.

First we get the location of the pan in the view. Then we calculate distanceYInRange based on the Y difference between the panned location and the starting Y location. We obtain the (x, y) point on the Bézier curve using the getPointAtPercent method. The last step is to move emojiView to the new location.
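A sketch of the pan handler built on the pieces above; p0, p1, p2, and emojiView come from the earlier steps, and the clamping of t is my own addition:

@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    let location = recognizer.location(in: view)

    // Progress along the curve, driven by the pan's vertical distance
    let totalYDistance = p2.y - p0.y
    let t = max(0, min(1, (location.y - p0.y) / totalYDistance))

    // Evaluate the curve once per axis and move the view there
    let x = getPointAtPercent(t: t, start: p0.x, control: p1.x, end: p2.x)
    let y = getPointAtPercent(t: t, start: p0.y, control: p1.y, end: p2.y)
    emojiView.center = CGPoint(x: x, y: y)
}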

The complete code in this article is available at github. For any questions, feel free to comment. Happy coding!!

Drag and Drop in UICollectionView in iOS 11

What’s New:

iOS 11 introduced the drag and drop feature, which lets you drag and drop content from one application to another on iPad. This article provides a high-level overview, with example code, of dragging and dropping UICollectionViewCells in a UICollectionView.

Click this link to see drag and drop in action for the code in this article.

Delegates:

  • dragDelegate – This delegate manages the dragging of items from the collection view.
  • dropDelegate – This delegate manages the dropping of items into the collection view.

Protocols:

  • UICollectionViewDragDelegate – This protocol requires implementing collectionView(_:itemsForBeginning:at:), which provides the items to be dragged.

  • UICollectionViewDropDelegate – This protocol requires implementing collectionView(_:performDropWith:), which handles the items being dropped onto the collection view.

Example:

Let’s explore the drag and drop feature by creating a collection view that holds images. Drag and drop allows images from other applications (like Photos, etc.) to be dragged into our collection view, and images from our collection view to be dropped into other applications.

Steps:

  1. Implement the collection view with each cell holding an image.
  2. Set the drag and drop delegates to the object that implements UICollectionViewDragDelegate and UICollectionViewDropDelegate. In this example, we set the delegates to self.

collectionView.dragDelegate = self

collectionView.dropDelegate = self

  3. Implement both of the new delegates. Code is provided below, with an explanation following it.
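A sketch of the delegate methods involved, assuming an images: [UIImage] data source array; the dropSessionDidUpdate method is an extra beyond the two discussed, added so incoming drops are accepted as a copy:

// MARK: - UICollectionViewDragDelegate
func collectionView(_ collectionView: UICollectionView,
                    itemsForBeginning session: UIDragSession,
                    at indexPath: IndexPath) -> [UIDragItem] {
    // Carry the cell's image in the drag item
    let image = images[indexPath.item]
    return [UIDragItem(itemProvider: NSItemProvider(object: image))]
}

// MARK: - UICollectionViewDropDelegate
func collectionView(_ collectionView: UICollectionView,
                    dropSessionDidUpdate session: UIDropSession,
                    withDestinationIndexPath destinationIndexPath: IndexPath?) -> UICollectionViewDropProposal {
    // Copy content dragged in from another app
    return UICollectionViewDropProposal(operation: .copy, intent: .insertAtDestinationIndexPath)
}

func collectionView(_ collectionView: UICollectionView,
                    performDropWith coordinator: UICollectionViewDropCoordinator) {
    let destinationIndexPath = coordinator.destinationIndexPath
        ?? IndexPath(item: collectionView.numberOfItems(inSection: 0), section: 0)

    for item in coordinator.items {
        item.dragItem.itemProvider.loadObject(ofClass: UIImage.self) { object, _ in
            guard let image = object as? UIImage else { return }
            DispatchQueue.main.async {
                // Insert the dropped image into the data source and the collection view
                self.images.insert(image, at: destinationIndexPath.item)
                collectionView.insertItems(at: [destinationIndexPath])
            }
        }
    }
}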

Explanation:

Drag:

collectionView(_:itemsForBeginning:at:) – This method returns the UIDragItems that identify the content to be carried while dragging the cell.

This method alone is enough to allow an image to be dragged from a UICollectionViewCell into any other application that accepts dropped images.

Drop:

collectionView(_:performDropWith:) – This method captures the image dropped by the other application, creates a new UICollectionViewCell, and adds the image to it.

This article provides a basic introduction to the drag and drop feature. The complete code can be obtained from the GitHub link. Let me know in the comments if you have any queries. Happy coding!

What’s New In Swift 4

This article highlights the key features added or modified in Swift 4. Apple is scheduled to release Swift 4 in fall 2017, and its beta versions are already available for developers to download.

Downloading Swift 4 Snapshot

Swift 4.0 snapshots are prebuilt binaries that are automatically created from the swift-4.0-branch branch. The latest snapshot package can be downloaded from the Swift 4.0 Development section here.

Run the downloaded installer to install the snapshot. Then go to Xcode -> Toolchains and select the snapshot.

There is nothing more to do; you are all set to play with Swift 4. Create a new playground file and test all the new Swift 4 features yourself.

Here, I cover the important new and modified features, and I will keep adding more as details become available.

Strings

String is a collection

Strings are collections again in Swift 4, as they were in Swift 2. This allows iterating over the characters of a string directly, and just like any other collection, a string can be reversed and you can apply map() and flatMap() to it!
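For example:

let greeting = "Hello, Swift 4!"

// Iterate over characters directly — no .characters needed
for character in greeting {
    print(character)
}

let reversed = String(greeting.reversed())   // "!4 tfiwS ,olleH"
let letters = greeting.map { String($0) }    // ["H", "e", "l", "l", "o", ...]
print(greeting.count)                        // 15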

Multi-line string literals

Swift 3 requires line breaks (\n) to write multi-line strings. Swift 4 introduces triple quotes (""") to start and end a multi-line string, and it lets you use quote marks without escaping them.

The indentation of the closing delimiter determines how much whitespace is stripped from the start of each line.
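For example:

let message = """
    This is a "multi-line" string.
    No escaping of quote marks is needed,
    and the closing delimiter's indentation is stripped from every line.
    """
print(message)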

Substring is new type

Swift 4 introduces the Substring type to represent string slices.

Substring was introduced to optimize memory use for strings: a slice shares storage with its original String. I will create a separate article on memory management and talk about Substring in more depth there.
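A quick example:

let sentence = "Swift 4 introduces Substring"
let firstWord = sentence.prefix(5)    // type is Substring, not String

// A Substring shares storage with its original String,
// so convert it to String before keeping it around long-term.
let stored = String(firstWord)
print(stored)                         // "Swift"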

Unicode 9 Characters

With Unicode 9 support, emoji that are composed of multiple Unicode scalars now count as a single Character in Swift 4.
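For example:

// An emoji built from several Unicode scalars counts as one Character
let thumbsUp = "👍🏽"          // thumbs up + skin-tone modifier
print(thumbsUp.count)          // 1 in Swift 4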