
Complete Guide to the Face Analytics API: Age, Gender, Expression, and Liveness in One Call

A detailed guide to the ARSA Face Analytics endpoint — get age estimation, gender detection, facial expression, and liveness verification from a single API call.

One Endpoint, Four Capabilities

The /face_analytics endpoint is the most information-dense endpoint in the ARSA Face Recognition API. A single call returns four types of analysis for every face detected in an image:

  • Age estimation — predicted age as a decimal number
  • Gender classification — male or female with a confidence score
  • Expression detection — the dominant facial expression
  • Passive liveness — whether the face is a real person or a spoof attempt

    No need for multiple API calls: one image in, one comprehensive response out.

    Making a Request

    The endpoint accepts a POST request with a face image:

    cURL

    bash
    curl -X POST "https://faceapi.arsa.technology/api/v1/face_analytics" \
      -H "x-key-secret: your-api-key" \
      -F "face_image=@photo.jpg"
    

    Python

    python
    import requests
    
    with open("photo.jpg", "rb") as image_file:
        response = requests.post(
            "https://faceapi.arsa.technology/api/v1/face_analytics",
            headers={"x-key-secret": "your-api-key"},
            files={"face_image": image_file}
        )
    result = response.json()
    print(result)
    

    JavaScript (Node.js)

    javascript
    const fs = require('fs');
    const FormData = require('form-data');
    const axios = require('axios');
    
    async function analyzeFace() {
      const form = new FormData();
      form.append('face_image', fs.createReadStream('photo.jpg'));
    
      const response = await axios.post(
        'https://faceapi.arsa.technology/api/v1/face_analytics',
        form,
        {
          headers: {
            'x-key-secret': 'your-api-key',
            ...form.getHeaders()
          }
        }
      );
      console.log(response.data);
    }
    
    analyzeFace();
    

    Response Format

    Here is a complete example response:

    json
    {
      "status": "success",
      "faces": [
        {
          "age": 31.2,
          "gender": "male",
          "gender_probability": 0.96,
          "expression": "neutral",
          "bounding_box": [85, 62, 290, 310],
          "passive_liveness": {
            "is_real_face": true,
            "antispoof_score": 0.93
          }
        }
      ]
    }
    

    Let's break down each field.

    Top-Level Fields

    | Field | Type | Description |
    |-------|------|-------------|
    | status | string | "success" or "error" |
    | faces | array | Array of face objects detected in the image |

    Face Object Fields

    | Field | Type | Description |
    |-------|------|-------------|
    | age | float | Estimated age in years (e.g., 31.2). Typically accurate within a few years. |
    | gender | string | "male" or "female" |
    | gender_probability | float | Confidence score for the gender prediction, from 0 to 1. Values above 0.9 indicate high confidence. |
    | expression | string | Detected facial expression. One of: "neutral", "happy", "sad", "surprise", "anger" |
    | bounding_box | array | Coordinates of the face in the image as [x1, y1, x2, y2] (top-left and bottom-right corners) |
    | passive_liveness | object | Liveness detection results |

    Passive Liveness Object

    | Field | Type | Description |
    |-------|------|-------------|
    | is_real_face | boolean | true if the face appears to be a live person, false if it looks like a photo, screen, or mask |
    | antispoof_score | float | Confidence score from 0 to 1, where higher values indicate a real face |

    For a deeper dive into liveness, read how passive liveness detection works.
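In practice, the boolean flag and the score can be combined into a stricter gate. The sketch below is illustrative only: the 0.8 cutoff is an assumption, not an official recommendation, so calibrate it against your own set of real and spoofed samples.

```python
# Sketch: a stricter liveness gate that requires both the boolean flag
# and a minimum antispoof_score. The 0.8 cutoff is an assumption.

def passes_liveness(face, min_score=0.8):
    liveness = face["passive_liveness"]
    return liveness["is_real_face"] and liveness["antispoof_score"] >= min_score

# Using the example face from the response above:
face = {"passive_liveness": {"is_real_face": True, "antispoof_score": 0.93}}
print(passes_liveness(face))  # True
```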

    Multiple Faces

    If the image contains multiple people, the faces array includes an entry for each detected face:

    json
    {
      "status": "success",
      "faces": [
        {
          "age": 28.5,
          "gender": "female",
          "gender_probability": 0.98,
          "expression": "happy",
          "bounding_box": [50, 30, 200, 220],
          "passive_liveness": { "is_real_face": true, "antispoof_score": 0.91 }
        },
        {
          "age": 34.1,
          "gender": "male",
          "gender_probability": 0.94,
          "expression": "neutral",
          "bounding_box": [280, 45, 430, 250],
          "passive_liveness": { "is_real_face": true, "antispoof_score": 0.89 }
        }
      ]
    }
    

    Use the bounding_box coordinates to match each result to the corresponding face in the image.
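One simple way to do that matching is to sort the results left to right by each box's x1 coordinate before labeling them. A minimal sketch, reusing the two-face response above:

```python
# Sketch: order face results left to right using the x1 (left edge)
# of each bounding_box, then label them for display.

faces = [
    {"gender": "male", "age": 34.1, "bounding_box": [280, 45, 430, 250]},
    {"gender": "female", "age": 28.5, "bounding_box": [50, 30, 200, 220]},
]

ordered = sorted(faces, key=lambda f: f["bounding_box"][0])
for i, face in enumerate(ordered, start=1):
    x1, y1, x2, y2 = face["bounding_box"]
    print(f"Face {i} (left to right): {face['gender']}, "
          f"~{face['age']:.0f} years, box=({x1},{y1})-({x2},{y2})")
```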

    Best Practices for Accuracy

    Image Quality

  • Resolution: Ensure faces are at least 100x100 pixels in the image. Larger is better.
  • Lighting: Even, front-facing lighting produces the best results. Avoid harsh side lighting or backlighting.
  • Focus: Blurry images reduce accuracy across all analysis types.

    Face Positioning

  • Angle: Front-facing or near-front-facing works best. Extreme profiles may reduce accuracy.
  • Obstruction: Sunglasses, masks, and heavy makeup can affect age, gender, and expression accuracy.
  • Distance: Moderate distance works best. Extreme close-ups or very distant faces are harder to analyze.
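The resolution guideline above can also be enforced programmatically: after a call, check each face's bounding box against the 100x100-pixel minimum before trusting its analytics. A minimal sketch:

```python
# Sketch: verify a detected face meets the suggested 100x100-pixel
# minimum using its bounding_box ([x1, y1, x2, y2]).

def face_large_enough(bounding_box, min_side=100):
    x1, y1, x2, y2 = bounding_box
    return (x2 - x1) >= min_side and (y2 - y1) >= min_side

print(face_large_enough([85, 62, 290, 310]))  # True: the face is 205x248 px
print(face_large_enough([10, 10, 80, 90]))    # False: only 70x80 px
```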

    Expression Detection Tips

  • The API returns the dominant expression at the moment the photo was taken.
  • Genuine expressions are detected more reliably than posed or exaggerated ones.
  • For applications tracking expressions over time, sample multiple frames and look at the distribution rather than relying on a single frame.
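Tallying the per-frame results can be as simple as a counter. A sketch, assuming you have already called the endpoint once per frame and collected the first face's expression from each response:

```python
# Sketch: aggregate per-frame expressions and report the dominant one,
# rather than trusting any single frame.
from collections import Counter

frame_expressions = ["neutral", "happy", "happy", "neutral", "happy"]

counts = Counter(frame_expressions)
dominant, freq = counts.most_common(1)[0]
print(f"Dominant expression: {dominant} ({freq}/{len(frame_expressions)} frames)")
```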

    Age Estimation Tips

  • Age estimation provides an approximation, typically within a few years of actual age.
  • Results are most accurate for adults aged 18-65.
  • Factors like makeup, facial hair, and lighting can shift estimates slightly.
  • For age verification use cases, use the estimate as a screening tool with appropriate margins.
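One way to apply that margin in code is a three-way decision: clearly over, clearly under, or close enough to the boundary to warrant a document check. The minimum and margin values below are illustrative assumptions:

```python
# Sketch: age screening with a safety margin. Estimates near the
# boundary are flagged for manual verification instead of auto-decided.

def screen_minimum_age(estimated_age, minimum=18, margin=3):
    if estimated_age >= minimum + margin:
        return "pass"      # clearly above the threshold
    if estimated_age < minimum - margin:
        return "fail"      # clearly below the threshold
    return "review"        # borderline: ask for an ID document

print(screen_minimum_age(31.2))  # pass
print(screen_minimum_age(19.0))  # review
print(screen_minimum_age(12.4))  # fail
```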

    Batch Processing

    For applications that need to process many images, here is an efficient pattern:

    python
    import requests
    from concurrent.futures import ThreadPoolExecutor
    
    API_KEY = "your-api-key"
    API_URL = "https://faceapi.arsa.technology/api/v1/face_analytics"
    
    def analyze_single(image_path):
        try:
            # Use a context manager so the file handle is always closed
            with open(image_path, "rb") as image_file:
                response = requests.post(
                    API_URL,
                    headers={"x-key-secret": API_KEY},
                    files={"face_image": image_file},
                    timeout=15
                )
            return {"path": image_path, "result": response.json()}
        except Exception as e:
            return {"path": image_path, "error": str(e)}
    
    # Process up to 5 images concurrently
    image_paths = ["img1.jpg", "img2.jpg", "img3.jpg", "img4.jpg", "img5.jpg"]
    
    with ThreadPoolExecutor(max_workers=5) as executor:
        results = list(executor.map(analyze_single, image_paths))
    
    for r in results:
        if "error" in r:
            print(f"{r['path']}: Error - {r['error']}")
        else:
            faces = r["result"].get("faces", [])
            print(f"{r['path']}: {len(faces)} face(s) detected")
            for face in faces:
                print(f"  Age: {face['age']:.0f}, Gender: {face['gender']}, "
                      f"Expression: {face['expression']}")
    

    Batch Processing Tips

  • Respect rate limits. Adjust max_workers based on your plan's rate limit. Start conservatively and increase.
  • Add retry logic. If a request fails due to rate limiting (HTTP 429), wait and retry.
  • Handle errors gracefully. Some images may not contain faces or may fail to upload. Always check the response status.
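A retry wrapper for the 429 case might look like the sketch below. It is deliberately generic: `send_request` is any zero-argument callable returning a response object with a `status_code` attribute, for example a lambda wrapping the `requests.post` call from the batch example. The delay values are illustrative.

```python
# Sketch: retry on HTTP 429 with exponential backoff (2s, 4s, 8s, ...).
import time

def with_retry(send_request, retries=3, base_delay=2.0):
    response = send_request()
    for attempt in range(retries):
        if response.status_code != 429:
            break  # success or a non-rate-limit error: stop retrying
        time.sleep(base_delay * (2 ** attempt))
        response = send_request()
    return response
```

For example: `with_retry(lambda: analyze_single("img1.jpg"))` would retry the batch helper above whenever the API reports a rate limit.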

    When to Use Face Analytics vs Recognition

    | Goal | Endpoint |
    |------|----------|
    | Analyze demographics (age, gender, expression) without identifying anyone | /face_analytics |
    | Identify who someone is from a registered database | /face_recognition/recognize_face |
    | Verify two faces are the same person | /face_recognition/validate_faces |
    | Register a face for future recognition | /face_recognition/register_face |

    The /face_analytics endpoint is privacy-friendly because it does not perform identification. You get rich demographic and expression data without storing or matching any identity. This makes it ideal for audience analytics, customer experience monitoring, and age verification.

    Note that the recognition endpoints (recognize_face, validate_faces) also return age, gender, and expression data automatically. If you are already using recognition, you get analytics for free.

    Error Handling

    Always handle potential errors:

    python
    with open("photo.jpg", "rb") as image_file:
        response = requests.post(
            API_URL,
            headers={"x-key-secret": API_KEY},
            files={"face_image": image_file}
        )
    
    if response.status_code == 200:
        data = response.json()
        if data["status"] == "success":
            if len(data["faces"]) > 0:
                # Process faces
                pass
            else:
                print("No faces detected in the image")
        else:
            print(f"API error: {data.get('message', 'Unknown error')}")
    elif response.status_code == 401:
        print("Invalid API key")
    elif response.status_code == 429:
        print("Rate limit exceeded - wait and retry")
    else:
        print(f"HTTP error: {response.status_code}")
    

    Getting Started

    The face analytics endpoint is available on all plans, including the free tier with 100 API calls per month.

  • Create your account
  • Copy your API key from the dashboard
  • Make your first call using the examples above

    For specific use cases, explore our guides on facial expression detection, age and gender estimation, and building an emotion-aware app.

    Ready to get started?

    Try ARSA Face Recognition API free with 100 API calls/month.

    Start Free Trial