Beginner’s Guide to Using Microsoft Cognitive Services Face API: Part 2

Part 1 was written because I couldn’t find a direct route to get the subscription keys required to use the MCS Face API service.
This tutorial exists because I found myself bouncing around the documentation, developer forums, and Stack Overflow for my project. I streamlined the starter code to accomplish my specific task of detecting a smile in an image of a Kiva borrower.
My development setup for this part of the project was Python 3.6 in a Jupyter Notebook. Why? I wanted to see my images quickly and inline with my code, to execute code line by line, and to test code function by function. I do switch to a text editor like Sublime Text in the next tutorial when I move to batch processing multiple images. I’ll leave exploring the rabbit hole of installing these to you, but my quick advice is to use the Anaconda distribution.
This tutorial was very useful in helping me install the library and analyze an image from a website using Python. The Microsoft api and documentation support nearly all the other popular web development languages as well. Hopefully it’ll work for you!
Step 3: Install necessary python libraries
For Python, you’ll need to install the requests library with pip
pip install requests

My data set of images was stored locally on my laptop. The currently available Microsoft documentation isn’t very clear about how to upload pictures using its service. I mined Stack Overflow, so you don’t have to. I also found some of the import statements in the sample code confusing, so I rely on Requests: HTTP for Humans, which is recommended by the Python 3 documentation.
Step 4: Tweak the documentation’s sample code
Starting with an example is where most programmers begin their process; they find sample code and switch out parts as needed. Most of the following code is from the official Microsoft documentation with a few tweaks for local file upload and some added explanation in the comments. Here’s what worked for me:
#!/usr/bin/env python

####################################
# File name: detect_face.py        #
# Author: me fab                   #
####################################

""" Python 3.6 script that opens a locally stored image file,
passes the binary to the Microsoft Face API for image detection
analysis, and displays the json response """

__author__ = "me fab"
__license__ = "BSD"
__version__ = "3.6"
__status__ = "Prototype"

# import necessary libraries, you need to have previously installed
# these via pip
import requests

# Replace 'KEY_1' with your subscription key as a string
subscription_key = 'KEY_1'
filename = 'path-to/yr-image-filename.jpg'

# Replace or verify the region.
#
# You must use the same region in your REST API call as you used to
# obtain your subscription keys. For example, if you obtained your
# subscription keys from the westus region, replace "westcentralus"
# in the URI below with "westus".
#
# NOTE: Free trial subscription keys are generated in the
# westcentralus region, so if you are using a free trial
# subscription key, you should not need to change this region.
uri_base = 'https://westcentralus.api.cognitive.microsoft.com'

# Request headers
# for locally stored image files use
# 'Content-Type': 'application/octet-stream'
headers = {
    'Content-Type': 'application/octet-stream',
    'Ocp-Apim-Subscription-Key': subscription_key,
}

# Request parameters
# The detection options for the MCS Face API; check the MCS Face API
# documentation for the complete list of features available for
# detection in an image.
# These parameters tell the api I want to detect a face and a smile.
params = {
    'returnFaceId': 'true',
    'returnFaceAttributes': 'smile',
}

# route to the face api
path_to_face_api = '/face/v1.0/detect'

# open jpg file as binary file data for intake by the MCS api
with open(filename, 'rb') as f:
    img_data = f.read()

try:
    # Execute the api call as a POST request.
    # What's happening?: You're sending the data, headers, and
    # parameters to the api route and saving the MCS server's
    # response to a variable.
    # Note: the MCS Face API analyzes one image at a time.
    response = requests.post(uri_base + path_to_face_api,
                             data=img_data,
                             headers=headers,
                             params=params)
    print('Response:')

    # json() is a method from the requests library that converts
    # the json response to a Python-friendly data structure
    parsed = response.json()

    # display the image analysis data
    print(parsed)

except Exception as e:
    print('Error:')
    print(e)
Try tweaking this script to add your own subscription key and your own image file name.
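Once you have the parsed response, you can dig the smile value out of it. As a rough sketch (the faceId and smile values below are made up for illustration; a real response comes from response.json()), the api returns a list with one dict per detected face:

```python
# Sketch of reading the smile score out of a parsed Face API
# response. The sample values below are made up for illustration;
# in the script above, this data comes from response.json().
sample_parsed = [
    {
        'faceId': 'c5c24a82-6845-4031-9d5d-978df9175426',  # hypothetical id
        'faceAttributes': {'smile': 0.9},                  # hypothetical score
    }
]

for face in sample_parsed:
    # 'smile' is a confidence value between 0 and 1
    smile_score = face['faceAttributes']['smile']
    print('face', face['faceId'], 'smile score:', smile_score)
```

I use this shape of lookup in the batch processing tutorial, so it is worth trying on your own response first.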
Step 5: Run the code
If you’re working in a Jupyter notebook, writing the code above into a cell and pressing <shift>+<return> will run the code and produce the output in an output cell.
If you’re working in a text editor, save the file to a folder on your computer as

detect_face.py # or use your own filename convention

Then open a terminal or a bash shell and run your python program:

my-computer: face-detect-project-folder$ python detect_face.py

A successful response will look something like the JSON displayed in the documentation.
Debugging tips:
If the parsed response is an empty list, the response may look like this:

Response:
[]

The api did not detect a face in the image. You might want to try a different image or try the online face detection tool to check your image.
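A quick way to tell the cases apart in code is to check the shape of the parsed response. This is a sketch based on what I saw while debugging: a list of faces on success, an empty list when no face is found, and a dict with an 'error' entry when something like a bad subscription key fails (treat the exact error fields as an assumption and check your own output):

```python
# Sketch of triaging a parsed Face API response. Assumed shapes:
# a list of face dicts on success, an empty list when no face is
# detected, and a dict with an 'error' key on failure.
def triage_response(parsed):
    if isinstance(parsed, dict) and 'error' in parsed:
        return 'error: ' + parsed['error'].get('message', 'unknown')
    if parsed == []:
        return 'no face detected - try another image'
    return 'detected {} face(s)'.format(len(parsed))

print(triage_response([]))
print(triage_response({'error': {'code': '401', 'message': 'Access denied'}}))
print(triage_response([{'faceId': 'abc'}]))
```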
Try it out! The api can return a new thumbnail image, and other types of data that I’m not covering here. You’ll have to read the documentation, maybe struggle a little, but I’m sure you’ll figure the other stuff out via Google and Stack Overflow. I hope this tutorial gives you a head start, and you can have some fun with a powerful computer vision tool!
You can stop here and keep playing with the Microsoft Cognitive Services Face API. If you want to go further in this adventure, you can head for part 3 and learn how to batch process more than a single image at a time.





