Case Study 2: Introduction

Google’s Cloud Vision API allows users to feed images to a pre-trained artificial intelligence (AI), which can perform a variety of tasks such as content labeling and optical character recognition. As with many AIs, the algorithms driving Vision are opaque, and the labels it assigns derive from training image sets that Google does not disclose. Yet AIs are becoming an increasingly significant audience for human behavior: posts to social media create data for the machine gaze. In November 2018, the Washington Post reported on a company called Predictim that sold access to an AI algorithm used to analyze and judge the social media presence of potential babysitters and other caretakers (after a backlash, the company is no longer on the market).

Individuals are not the only ones who create social identities. In this project, I turn Google Cloud Vision’s AI gaze on New York City’s official presentation of self (to borrow Erving Goffman’s term) through its Instagram channel. In keeping with the spirit of Data TRIKE, I investigate how manipulating data inputs (in this case, image resolution) affects the AI’s interpretation of an image’s contents.
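To make that manipulation concrete, here is a minimal sketch of the resolution step, assuming the Pillow imaging library; the source filename and target widths are hypothetical stand-ins, not the project’s actual settings.

```python
# A hypothetical resolution-manipulation step: downsample one image to
# several widths before submitting each version to the AI.
from PIL import Image

SOURCE = "nyc_instagram_post.jpg"  # hypothetical local copy of a post

for width in (1080, 540, 270, 135):
    img = Image.open(SOURCE)
    height = round(img.height * width / img.width)  # preserve aspect ratio
    img.resize((width, height), Image.LANCZOS).save(f"post_{width}px.jpg")
```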

I chose the NYC Instagram account to lay groundwork and test methods for a potential future project that would apply artificial intelligence content labeling to a more extensive set of governmental social media images. I chose Google Cloud Vision for the accessibility of its API (the interface through which one submits queries to the AI) and the sample code provided in its documentation.
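As an illustration of that accessibility, here is a minimal sketch of a single label-detection request, modeled on the quickstart in Google’s documentation. It assumes the google-cloud-vision Python client library is installed, that credentials are configured via the GOOGLE_APPLICATION_CREDENTIALS environment variable, and that the filename is one of the hypothetical resized files from the sketch above.

```python
# A minimal label-detection request against the Cloud Vision API,
# following the documented Python quickstart pattern.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Hypothetical file produced by the resizing step above.
with open("post_1080px.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # Each annotation carries a text label and a confidence score.
    print(f"{label.description}: {label.score:.2f}")
```

Submitting each resized version of the same image through this call is enough to compare how the returned labels and their scores shift with resolution.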

 
