
The consultation call for this case started the way they typically do. The attorney tells us how an employee allegedly jeopardized the company’s reputation. I was expecting the mundane, perhaps a problematic email sent to a customer or an X (formerly Twitter) rant that touched a third rail.


However, when we got down to the details of this case, it raised my eyebrows, and with over a decade of working civil and criminal cases, my eyebrows rarely leave their designated location.


The employee’s story goes like this: he is on an international business trip and, after working hard all week, decides to let off some steam by exploring the city. By the next morning, he is on a plane back to the States.



“I Was Kidnapped”


After about a week, accounting notices that the employee had racked up a five-figure charge on his corporate credit card, almost all of it accrued on a single night. The employee had never mentioned anything to his boss, accounting, or anyone else. When questioned, the employee recounts a harrowing experience. He claims that he was kidnapped on the night in question and detained against his will for over six hours while the kidnappers charged his corporate card on “extracurricular activities.”


The company’s leadership was suspicious, and they had every right to be. It seemed unusual that the card was compromised but never stolen. First, according to the employee, the kidnappers returned it to him once they were partied out. Second, the employee never reported the attack to the authorities abroad, authorities stateside, or his own company.


At this juncture, Envista’s digital forensic experts join the story. The company’s representatives contacted Envista to perform a forensic examination of the employee’s iPhone. We performed the forensic extraction, or copy, of the data from the mobile phone and began the data analysis, which included recovering deleted data. Our team immediately noticed that applications were deleted shortly before the employee provided the iPhone to us for extraction, but we’ll come back to that in a moment.


An Apple Watch was connected to the phone, and the user had permitted it to record health data. Miles of walking were recorded on the night in question, contradicting the employee’s claim that he was detained in a single location all night. Unless the kidnappers truly enjoyed watching someone walk in circles from sunset to sunrise. The kidnapping story is losing credibility by the moment, but motive has yet to be established.


Here we come back to those deleted applications. One of the applications was a text message application. Although the user deleted the application, we were able to recover the data from it. Also deleted was the Google Translate application.


After forensically stitching together the data artifacts, we recovered a message from the employee’s phone. The message was written in English, translated to the local language, and subsequently sent to an international number when the employee was catching a flight back home.


The message read as follows, “Last night was amazing and I can’t believe you can’t find a man. Someday someone will find you and it will all end up perfectly. Find someone who has the same passions as you do. I want you to know how special you are, you are so beautiful, perfect by American standards. Words can’t express how much I will miss you.”


From the sent message, we were left with two options. One, the employee had a rare and severe case of Stockholm Syndrome. Two, this story had more holes than Swiss cheese. As you have surely deduced, our examination assisted in a good outcome for the company.


As to the employee, I don’t know what happened to him. Perhaps a fresh start at a new company or a really fun nickname at his current one.


*Disclaimer: Personal details and information from this story have been altered to protect parties involved.

Facial recognition technology is a type of biometric authentication that uses facial characteristics to identify or verify an individual. It operates by mapping and measuring facial features from a digital image or video frame and comparing these features against a database of known faces to find a match.



How Does Facial Recognition Technology Work?


Face Detection

The initial step in facial recognition technology is detecting and isolating faces within an image or video frame. Advanced algorithms identify patterns and shapes that resemble human faces. This stage ensures that the subsequent analysis focuses solely on the facial features.
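The detection stage described above can be illustrated with a toy sliding-window scan. This is a minimal sketch, not a real detector: production systems use trained models (Haar cascades or convolutional networks), and the `score_window` function here is a hypothetical stand-in that simply flags bright regions.

```python
# Toy sliding-window "detection" sketch (illustrative only).
# Real detectors use trained models; score_window is a stand-in
# that treats bright square regions as face candidates.

def score_window(image, x, y, size):
    """Stand-in 'face-likeness' score: mean pixel intensity of the window."""
    total = sum(image[y + dy][x + dx] for dy in range(size) for dx in range(size))
    return total / (size * size)

def detect_faces(image, size=2, threshold=200):
    """Slide a size x size window over the image; keep windows above threshold."""
    h, w = len(image), len(image[0])
    hits = []
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            if score_window(image, x, y, size) >= threshold:
                hits.append((x, y, size))
    return hits

# A 4x4 grayscale "image" with one bright 2x2 patch at (2, 2)
img = [
    [10, 10, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 250, 250],
    [10, 10, 250, 250],
]
print(detect_faces(img))  # only the bright patch at (2, 2) is reported
```

The essential idea carries over to real systems: scan candidate regions, score each against a face model, and pass only the high-scoring regions to the next stage.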


Facial Feature Extraction

Once a face is detected, the algorithm proceeds to extract unique features from the facial image. These features encompass various aspects, including the distance between the eyes, the shape of the nose, the contour of the lips, and the pattern of wrinkles around the mouth. These details collectively form a distinctive facial signature.
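A simplified version of such a facial signature can be sketched as a vector of pairwise landmark distances. This is an illustrative toy, assuming landmark coordinates have already been located; real systems extract hundreds of measurements or learned embeddings, and the four landmark names used here are hypothetical.

```python
# Sketch of a landmark-based feature vector (illustrative only).
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def face_signature(landmarks):
    """Build a feature vector of pairwise landmark distances, normalized
    by the inter-eye distance so the signature is scale-invariant."""
    eye_gap = dist(landmarks["left_eye"], landmarks["right_eye"])
    keys = ["left_eye", "right_eye", "nose_tip", "mouth_center"]
    features = []
    for i in range(len(keys)):
        for j in range(i + 1, len(keys)):
            features.append(dist(landmarks[keys[i]], landmarks[keys[j]]) / eye_gap)
    return features

landmarks = {
    "left_eye": (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip": (50.0, 60.0),
    "mouth_center": (50.0, 80.0),
}
sig = face_signature(landmarks)
print(sig[0])  # eye-to-eye distance divided by itself: 1.0
```

Normalizing by a stable reference distance is what lets the same face produce a similar signature at different image resolutions.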


Facial Feature Matching

The extracted facial features are then compared to a comprehensive database of known facial features. Facial recognition algorithms employ sophisticated mathematical techniques to calculate the similarity between the features extracted from the analyzed face and those stored in the database. Key factors involved in this process include:


  • Distance Between Facial Landmarks: Measuring the distance between critical facial landmarks, such as the eyes, nose, and mouth, provides crucial information for distinguishing individuals.

  • Shape and Contour of Facial Features: The unique shape and contour of facial elements, such as the nose, lips, and jawline, further contribute to the distinctiveness of each person's face.

  • Skin Texture and Wrinkles: Although subtler, skin texture and wrinkles can also be utilized to enhance the accuracy of facial recognition.


Identification or Verification

Based on the similarity score calculated during the feature-matching process, the facial recognition algorithm determines whether the analyzed face matches a known individual or remains unidentified. If the similarity score surpasses a predefined threshold, the face is identified as belonging to the person in the database. Conversely, if the score falls below the threshold, the face is classified as unknown.
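The thresholding logic above can be sketched in a few lines. This is a minimal illustration assuming faces are already reduced to small feature vectors; the similarity formula and the 0.8 threshold are arbitrary choices for the example, not values from any real system.

```python
# Minimal sketch of the identification/verification step: compare a probe
# feature vector against a database and accept only above-threshold matches.
import math

def similarity(a, b):
    """Convert Euclidean distance into a similarity score in (0, 1]."""
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + d)

def identify(probe, database, threshold=0.8):
    """Return the best-matching identity, or None if below the threshold."""
    best_name, best_score = None, 0.0
    for name, features in database.items():
        score = similarity(probe, features)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

db = {"alice": [1.0, 2.0, 3.0], "bob": [4.0, 1.0, 0.5]}
print(identify([1.0, 2.1, 3.0], db))  # close to alice's features -> "alice"
print(identify([9.0, 9.0, 9.0], db))  # far from everyone -> None
```

Raising the threshold trades false accepts for false rejects, which is why deployed systems tune it per application.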


Types of Facial Recognition Algorithms


Facial recognition algorithms can be categorized into two main types: feature-based algorithms and holistic algorithms.


Feature-Based Algorithms

Feature-based algorithms focus on extracting and matching specific facial features, such as the relative positions of facial landmarks and distinctive attributes. These algorithms excel at recognizing faces even in challenging conditions, such as poor lighting or partial occlusion.


Holistic Algorithms

Holistic algorithms treat the entire face as a single image and leverage statistical methods to compare faces. They are particularly effective in handling variations in facial expression and lighting, making them a preferred choice for applications where image quality may be suboptimal.


Accuracy and Challenges in Facial Recognition Technology


Accuracy

Facial recognition technology has made impressive strides in achieving high accuracy rates, with some algorithms boasting success rates exceeding 99%. However, it is not without its challenges, which include:


  • Image Quality: The quality of the input image significantly impacts the accuracy of facial recognition. Low-quality images, often obtained from surveillance cameras, can hinder the algorithm's ability to correctly identify faces.

  • Variations in Facial Appearance: Facial recognition may encounter difficulties when faced with variations in facial appearance caused by factors such as facial expressions, aging, or changes in lighting conditions. These variations can alter the facial features used for identification.

  • Database Biases: Facial recognition algorithms can exhibit biases if trained on datasets that are not representative of the overall population. This bias can result in inaccuracies, especially when recognizing individuals from underrepresented groups.


Privacy and Ethical Concerns


The widespread adoption of facial recognition technology has raised significant ethical concerns, particularly regarding privacy and mass surveillance. The ability to identify individuals without their consent has prompted concerns about privacy infringement and the potential for misuse.


Mass Surveillance

The deployment of facial recognition in public spaces has ignited debates about the prospect of a society under constant surveillance, where individuals may feel scrutinized in their daily lives. The capacity to track people's movements and activities using facial recognition technology has provoked anxieties about mass surveillance and its potential for abuse.


Algorithmic Biases

Facial recognition algorithms may display biases based on factors such as race, gender, and age, potentially leading to unfair outcomes and reinforcing societal prejudices.


Lack of Transparency

Many facial recognition algorithms are proprietary and lack transparency, making it challenging to assess their fairness and accuracy. This opacity raises questions about accountability and oversight.


Data Breaches

Facial recognition data is highly sensitive and can be exploited to monitor individuals' activities and whereabouts without their knowledge or consent. Any breach or misuse of this data can have far-reaching consequences.


Misidentification

Facial recognition is not infallible, and false positives can lead to mistaken identity. Such misidentifications can result in unwarranted scrutiny, loss of privacy, and even harm to individuals.


Case Study: China’s Implementation of Facial Recognition Technology


China has integrated facial recognition technology into almost every facet of a person’s daily life. While this provides security and convenience, it has significant implications for privacy.


The Chinese government has actively promoted and implemented this technology to achieve a range of objectives. In terms of public safety and security, facial recognition is extensively used for surveillance, law enforcement, and monitoring in public spaces and transportation hubs. It aids in identifying suspects and enhancing overall safety. Moreover, access control and security in buildings, hotels, and residential complexes have been streamlined through facial recognition, replacing traditional methods like keycards and PIN codes. Chinese cities have also adopted this technology for ticketless access to public transportation, allowing passengers to enter subways and buses with facial scans. In education, facial recognition is employed for automated attendance tracking, reducing administrative tasks for educators.


Additionally, China has made significant strides in integrating facial recognition into its mobile payment systems. This innovation allows users to make secure transactions by linking their facial data to their financial accounts. Retailers have capitalized on this technology for marketing, tailoring advertisements and product recommendations based on customer demographics. It's even used in healthcare for patient identification and medical record access, streamlining healthcare services. Furthermore, at border control points and immigration checkpoints, facial recognition expedites entry and exit procedures, enhancing the efficiency of travel processes.


Notably, China continues to develop its social credit system, which incorporates facial recognition as a tool for tracking and assessing individuals' behavior, influencing social credit scores based on various factors. Despite its myriad applications and advantages, the extensive use of facial recognition in China has prompted debates and concerns surrounding privacy, surveillance, and potential misuse. While there are certainly benefits to using facial recognition technology at this scale, it also makes it nearly impossible to go about your day without having your moment-by-moment activity tracked and recorded, for purposes both known and unknown to you.


Conclusion


Facial recognition technology represents a powerful tool with remarkable potential in various fields. It relies on intricate algorithms to analyze and compare facial features for identification and verification. While its accuracy has improved significantly, it is not without challenges, including issues related to image quality, variations in facial appearance, database biases, and privacy concerns.


The ethical considerations surrounding facial recognition are particularly complex, encompassing privacy infringement, mass surveillance, algorithmic biases, and the need for transparency and accountability. Striking a balance between harnessing the technology's advantages and addressing its drawbacks is essential to ensure responsible and ethical use in a rapidly evolving digital landscape.



Imagine having the ability to create brand-new content with just a few clicks of a button. This is the power of generative artificial intelligence (AI), a cutting-edge technology that can generate text, images, and videos based on existing data. While still in development, the potential of generative AI to revolutionize industries, such as marketing, entertainment, and product development, is truly incredible. With generative AI, the possibilities are endless.


The potential for generative AI to positively impact the world cannot be overstated. However, it is crucial to be aware of the risks of misuse. In this article, I home in on the challenges related to fake photos generated through artificial intelligence.




 

Common Procedures Before the AI Era

Before generative AI, video and photo forensics experts used various methods to determine if a photo was fake. Some of the most common procedures included:


Analyzing Metadata

A photo’s metadata can contain information about the camera used to take the picture, as well as the date and time the photo was taken. Forensic experts can use this information to identify inconsistencies that may indicate a photo is fake. For example, if the metadata indicates that the photo was taken with a camera model that was not yet available at the time the photo is purported to have been taken, that is a clear sign of forgery.
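The camera-not-yet-released check can be sketched as a simple comparison. This assumes the EXIF fields have already been parsed into a dict (real workflows use an EXIF reader library), and the camera model name and release date below are hypothetical illustration values, not real products.

```python
# Sketch of a metadata consistency check on pre-parsed EXIF-style fields.
# "ExampleCam X100" and its release date are hypothetical.
from datetime import date

CAMERA_RELEASE_DATES = {
    "ExampleCam X100": date(2020, 6, 1),  # hypothetical model and date
}

def flag_metadata_inconsistency(metadata):
    """Return a note if the claimed capture date predates the camera's release."""
    model = metadata.get("Model")
    taken = metadata.get("DateTimeOriginal")
    released = CAMERA_RELEASE_DATES.get(model)
    if released and taken and taken < released:
        return f"{model} was not released until {released}, but the photo claims {taken}"
    return None

suspect = {"Model": "ExampleCam X100", "DateTimeOriginal": date(2019, 3, 15)}
print(flag_metadata_inconsistency(suspect))  # flags the impossible capture date
```

In practice, examiners cross-check many such fields (software tags, GPS, timestamps) rather than any single one, since metadata can also be stripped or edited.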


Analyzing Lighting and Shadows

Forensic experts can look for inconsistencies in the lighting and shadows in a photo to identify signs of manipulation. If a shadow is going in the wrong direction, or if two objects are casting shadows in different directions, this could be a sign that the photo has been edited. Forensic experts use various tools and techniques to analyze the lighting and shadows in a photo, such as measuring the angles of shadows and comparing the brightness of different areas of the image.
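The shadow-angle measurement described above can be sketched numerically. This is a simplified illustration assuming an examiner has hand-marked (object base, shadow tip) point pairs and a single distant light source; it ignores angle wraparound and perspective effects that a real examination would account for.

```python
# Sketch of a shadow-direction consistency check: under one distant light
# source, shadows in a scene should point in roughly the same direction.
import math

def shadow_angle(base, tip):
    """Direction of the shadow in degrees, measured from the +x axis."""
    return math.degrees(math.atan2(tip[1] - base[1], tip[0] - base[0]))

def inconsistent_shadows(pairs, tolerance=15.0):
    """Flag pairs whose shadow direction deviates from the first by > tolerance."""
    angles = [shadow_angle(b, t) for b, t in pairs]
    reference = angles[0]
    return [i for i, a in enumerate(angles) if abs(a - reference) > tolerance]

pairs = [
    ((0, 0), (10, 10)),   # shadow at 45 degrees
    ((5, 0), (15, 11)),   # roughly 48 degrees: consistent
    ((8, 0), (8, -12)),   # straight down: suspicious
]
print(inconsistent_shadows(pairs))  # the third pair stands out
```

A flagged object whose shadow disagrees with the rest of the scene is a candidate for having been pasted in from another photo.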


Analyzing Textures

Forensic experts can also look at the textures in a photo to identify signs of manipulation. If someone's skin looks too smooth or plastic-like, this could be a sign that the image has been edited. Forensic experts can examine the photo's individual pixels and compare the textures of different objects in the photo.


Analyzing Reflections

Reflections in a photo can help forensic experts identify signs of manipulation. For example, if a reflection in a mirror differs from the object being reflected, this could be a sign that the photo has been edited. Forensic experts can use various tools and techniques to analyze the reflections in a photo, such as measuring the angles of reflections and comparing the brightness of different areas of the photo.


Specialized Video Forensic Software

There are specialized video forensic software programs that can be used to analyze photos for signs of forgery. These programs can look for inconsistencies in the lighting, shadows, textures, reflections, and other signs of forgery. For example, some software programs can be used to detect the presence of cloning, airbrushing, and liquefying.


While these methods are still relevant and useful for uncovering fakes created by generative AI, the technical challenges and expertise required to spot fakes have increased substantially. Even the most experienced video forensic examiners are challenged by fake photos created using generative AI. As generative AI technology develops, distinguishing between real and fake photos will become even more challenging. 


 

Stand Out Signs of a Faked Photo

As of this writing, generative AI has challenges in creating photos that can fool an experienced forensic examiner. There are various signs of a faked photo that an examiner would review, including:


Inconsistencies in Lighting and Shadows 

Generative AI models sometimes have difficulty creating realistic lighting and shadows. For example, a fake photo may have shadows that go in the wrong direction or that are too dark or too light.


Inconsistencies in Textures 

Generative AI models can also have difficulty creating realistic textures. For example, a fake photo may have skin that looks too smooth or plastic or hair that looks too perfect.


Inconsistencies in Reflections 

Generative AI models can have difficulty creating realistic reflections. For example, a fake photo may have a reflection in a mirror that is different from the object being reflected.



 

Examination Using Specialized Video and Photo Forensics Software

Fortunately, specialized video and photo forensics software in the hands of a qualified photo and video forensic expert is powerful and growing in capability as the need to authenticate photo evidence rises daily. For example, an examiner armed with these tools can perform the following examinations:


File Analysis

By searching databases of known images, the original unaltered image can sometimes be located, for example to show that it originated from a social media platform before being used to create a fake. In some instances, this type of analysis can also identify the originating device, such as the camera used to take the photo.


Compression and Reconstruction 

With forensic software, an examiner can uncover if a photo has multiple compression ratios in the same image and if the original compression used differs from the photo in review. Both would indicate potential tampering, for example, if more than one photo were collaged in creating the fake. This analysis can also uncover artifacts related to resizing, color processing, rotation, or other modifications to an image.
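A simplified version of this kind of compression analysis can be sketched as an error-level comparison. This is only an illustration of the thresholding step, assuming per-block recompression error values have already been computed from the image; the numbers below are hypothetical.

```python
# Simplified sketch of an error-level style check: regions pasted in from
# another source often show different recompression error than the rest
# of the image. block_errors is assumed to be precomputed per image block.
from statistics import median

def suspicious_blocks(block_errors, factor=3.0):
    """Flag blocks whose recompression error is far above the image's median."""
    m = median(block_errors.values())
    return sorted(k for k, e in block_errors.items() if e > factor * m)

# Hypothetical per-block error levels for a grid of image blocks
block_errors = {
    (0, 0): 1.1, (0, 1): 0.9, (0, 2): 1.0,
    (1, 0): 1.2, (1, 1): 6.5, (1, 2): 1.0,
}
print(suspicious_blocks(block_errors))  # the outlier block stands out
```

An outlier block is not proof of tampering on its own, which is why examiners corroborate it with the other analyses described here.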


Camera Identification

If the fake is made from a photo taken with a digital camera, it is possible to link it to the camera by the visual artifacts it creates. These artifacts are often undetectable to the human eye but can be used to link the tampered photo back to the device that took the picture. For example, if someone claimed they did not take the photo, but their camera can be positively identified as the device that took the photo by comparing the tampered photo and exemplar photos from the camera, their assertion would be proven false. 


Geometric Analysis

One of the most challenging parts of forging an image is keeping the lighting, shadows, and perspective consistent with what a camera would capture in reality. Forensic software can be used to analyze the visual scene captured by the photo to determine whether the shadows are cast realistically, whether the highlighting on an object or person makes sense given the location of a light source, and whether the angle from which the photo was taken matches a realistic perspective.


 

Suggestions for Attorneys and Claims Professionals

While their jobs are challenging enough, it is unfortunately true that attorneys and claims adjusters need to be more vigilant than ever before. A faked photo could be a screenshot of text messages containing a damning conversation, or an image of an alleged injury or assault. Complicating this issue is that the tools used to create generative AI photos are available to everyone and require little technical sophistication to employ.


In general, it is wise to maintain a posture of incredulity concerning photos submitted as evidence. Here are some suggestions for attorneys and claims professionals: 


  • Be skeptical of photos submitted as evidence, especially if the original device the photos were allegedly taken on is gone and cannot be used as a source of verification.  

  • Request the device that took the photos, not just the photos themselves. If you find a photo does warrant examination by a photo and video forensic expert, having the device the photo was allegedly taken with aids in the examination process. 

  • When looking at a photo, even if you cannot point to anything in particular but the image feels off, it may be worthwhile to have it examined.  

While generative AI creates new challenges, the sophistication of forensic methods and tools for examining photos has also grown. As a community, the legal and insurance world has dealt with forged documents, manually faked photos, and other forms of misinformation before. Knowing is the first step in preventing or remediating the impacts of faked photos, and that starts with being aware that one showing up in your case or claim is a real and distinct possibility. As they say, a picture is worth a thousand words. At least it used to be.
