Zamba Cloud: Computer vision for wildlife conservation


What is Zamba Cloud?

Zamba Cloud makes it easier to handle large numbers of camera trap videos for research and conservation.

Zamba Cloud uses machine learning to automatically detect and classify animals in camera trap videos. You can use Zamba Cloud to:

  • Classify which species appear in a video, including identifying blank videos
  • Train a custom model with your own labeled data to identify species in your habitat
  • And more 🙈 🙉 🙊

All without writing a line of code!

To generate predictions for unlabeled videos

Zamba includes multiple state-of-the-art, pretrained machine learning models for different contexts. Each is publicly available and can be used to classify new videos.

To generate new predictions, you'll upload videos and receive a spreadsheet telling you what species are most likely present in each video, allowing you to weed out false triggers and get straight to the videos of interest.

After creating an account, you can either upload videos directly or point Zamba Cloud to an FTP server where your videos are stored. Then, get back to your day while Zamba Cloud processes your videos. You'll get an email when it's done. Simply log back into your account and a spreadsheet with labels for each video will be waiting for you!

Step by step instructions

To train a model based on labeled videos

Users can upload additional labeled videos, and Zamba will train a new custom model.

After creating an account, you'll upload a set of videos and their correct species labels. These will be used to improve the African species classification model, building on what the model has already learned from thousands of training videos. The model can even learn to predict entirely new species and new ecologies, no matter where they are! Then you can go do some birdwatching, and you'll receive an email when your new model is ready. Your specialized model will then be available to classify any videos you upload to Zamba Cloud!

Step by step instructions

What models are available in Zamba Cloud?

Zamba Cloud comes with a few official model options that are pretrained for different contexts. The table below summarizes their relative strengths.

| Model | Geography | Relative strengths |
|---|---|---|
| Blank vs. non-blank | Central Africa, East Africa, West Africa, and Western Europe | Just blank detection, without species classification |
| African species | Central, East, and West Africa | Recommended species classification model for jungle ecologies |
| African species (slowfast) | Central, East, and West Africa | Potentially better than the default African species model at small species detection |
| European species | Western Europe | Trained on non-jungle ecologies |

All models except the African species (slowfast) model use an image-based architecture. The slowfast model uses a video-native architecture, which in some cases can better capture motion to detect small species. Check out the zamba python docs for more model details.

How accurate is Zamba Cloud?

To assess accuracy, model predictions were compared with labels manually applied by researchers and citizen scientists. Performance was measured on videos that were not part of the model's training data, which provides a more accurate picture of how the model will perform on other new data (like yours!).

Species classification

| Model | Top-1 accuracy | Top-3 accuracy |
|---|---|---|
| African species | 82% | 94% |
| African species (slowfast) | 61% | 80% |
| European species | 79% | 89% |
  • Top-1 accuracy is how frequently the correct label is the top label predicted by the model. E.g., the African species model predicted the correct species for 82% of videos.
  • Zamba Cloud outputs the top 3 predicted labels to enable researchers to easily surface videos of interest. Top-3 accuracy is how frequently the correct label is in the top three species with the highest predicted probability. E.g., for the African species model, the correct label was in the top three predicted labels for 94% of videos.
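
If you have manually labeled a sample of your own videos, you can compute the same metrics from the downloaded labels spreadsheet. Below is a minimal sketch in Python, assuming a hypothetical zamba_labels.csv merged with a hypothetical true_label column of manual labels (the top_*_label columns follow the spreadsheet format described later on this page):

```python
import pandas as pd

# Hypothetical input: Zamba Cloud's labels spreadsheet with an added
# "true_label" column containing your manual label for each video.
df = pd.read_csv("zamba_labels.csv")

top_1_accuracy = (df["top_1_label"] == df["true_label"]).mean()
top_3_accuracy = (
    df[["top_1_label", "top_2_label", "top_3_label"]]
    .eq(df["true_label"], axis=0)  # compare each top-k column to the true label
    .any(axis=1)
    .mean()
)

print(f"Top-1 accuracy: {top_1_accuracy:.0%}, top-3 accuracy: {top_3_accuracy:.0%}")
```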

Blank detection

All the species models contain a blank output label in addition to the species labels. This means either a species model or the blank model can be used for blank detection. The key difference is that the blank model only outputs the probability that the video does not contain an animal and does not do any species classification for non-blank videos. The table below compares blank performance between the African species model and the blank model.

| Model | Precision for blank videos | Recall for blank videos |
|---|---|---|
| Blank vs. non-blank | 83% | 89% |
| African species | 84% | 87% |
  • Precision is the percent of videos that the model labels as blank that are actually blank. For example, 83% of the videos that the blank model labels as blank are actually blank.
  • Recall is the percent of all blank videos that are correctly classified by the model. For example, the blank model correctly detects 89% of actually blank videos.
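
The same manually labeled sample can be used to check blank detection on your own footage. A minimal sketch, again assuming a hypothetical true_label column and treating a video as "predicted blank" when blank is the top label:

```python
import pandas as pd

# Hypothetical input: the labels spreadsheet with an added "true_label" column.
df = pd.read_csv("zamba_labels.csv")

predicted_blank = df["top_1_label"] == "blank"
actually_blank = df["true_label"] == "blank"

true_positives = (predicted_blank & actually_blank).sum()
precision = true_positives / predicted_blank.sum()  # of videos called blank, the share actually blank
recall = true_positives / actually_blank.sum()      # of truly blank videos, the share that were caught

print(f"Blank precision: {precision:.0%}, blank recall: {recall:.0%}")
```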

Our goal is to continually improve these algorithms, and you can help! The most valuable contribution to this effort is additional labeled data. Find out more about how you can submit a correction for the videos that Zamba Cloud got wrong. Or, if you have videos that are already labeled, you can share labeled data directly with us.

What species can Zamba Cloud identify?

Blank vs. non-blank model

The blank vs. non-blank model only outputs the probability that the video is blank, meaning that it does not contain an animal. It does not provide any species classification.

African species models
Aardvark
Antelope/Duiker
Badger
Bat
Bird
Blank
Cattle
Cheetah
Chimpanzee/Bonobo
Civet/Genet
Elephant
Equid
Forest buffalo
Fox
Giraffe
Gorilla
Hare/Rabbit
Hippopotamus
Hog
Human
Hyena
Large flightless bird
Leopard
Lion
Mongoose
Monkey/Prosimian
Pangolin
Porcupine
Reptile
Rodent
Small cat
Wild dog/Jackal
European species model
Bird
Blank
Domestic cat
European badger
European beaver
European hare
European roe deer
North American raccoon
Red fox
Weasel
Wild boar

Don't see what you're looking for? We are always open to expanding the list of species that Zamba can identify—see this FAQ question for more information.

Sign up or log in

To begin, either sign up to create an account or log in to your existing account.


How to classify unlabeled videos

This section walks through how to use Zamba Cloud if you don't know what species each of your videos contains and want to generate labels.

Tutorial

After logging in, you'll be taken to the Uploads page, where you will upload the videos you want Zamba Cloud to process. There are two ways of submitting videos: Direct upload or FTP upload.

Zamba currently supports the following video file formats: avi, mp4, mpeg, asf.
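
If your camera trap folders contain a mix of file types, one way to check what will be accepted before uploading is to filter by extension. A minimal sketch, where the camera_trap_videos folder name is an assumption:

```python
from pathlib import Path

SUPPORTED_EXTENSIONS = {".avi", ".mp4", ".mpeg", ".asf"}  # formats listed above

videos_dir = Path("camera_trap_videos")  # hypothetical local folder
files = [f for f in videos_dir.rglob("*") if f.is_file()]

uploadable = [f for f in files if f.suffix.lower() in SUPPORTED_EXTENSIONS]
other = [f for f in files if f.suffix.lower() not in SUPPORTED_EXTENSIONS]

print(f"{len(uploadable)} videos in supported formats, {len(other)} other files")
```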

Wondering how long it will take? See this FAQ question for more information.

Direct upload

Use this option to upload videos from local files on your computer.



  1. Go to the Direct Uploads tab of the Uploads page. Click the + New Direct Upload button. Direct uploads work best with a fast internet connection, as limited bandwidth can cause the browser to time out partway through the upload. If you have a slow internet connection, either check the fast upload box (which uses your computer's resources to reduce the size of the video before uploading), or consider an FTP upload instead.
  2. Drag and drop videos or click Select File(s) to select the video(s) you want to upload from your computer. When you are finished selecting videos, click Upload.
  3. You may add an optional title and/or description. Use the ML MODEL dropdown menu to choose which of Zamba Cloud's pretrained models you'd like to use to classify your videos (if you have trained any of your own models, they'll appear here too!).
  4. Click Begin Processing.
FTP upload

In order to use this option, your videos have to already be uploaded to an FTP server. Many large organizations and universities run FTP servers, which can be used to host data for Zamba Cloud.



  1. Go to the FTP Uploads tab of the Uploads page. Click the + New FTP Upload button.
  2. Enter the FTP server URL path, username, and password in the corresponding fields and click Upload from FTP. Keep in mind that with an FTP submission, all the videos in the specified folder will be submitted to Zamba Cloud. You will not be able to pick and choose which videos within the folder are processed.
  3. Use the ML MODEL dropdown menu to choose which of Zamba Cloud's pretrained models you'd like to use to classify your videos (if you have trained any of your own models, they'll appear here too!).
  4. Click Begin Processing.

Downloading video labels

Once you have submitted videos, you can see their status on the Uploads page under the corresponding "Direct Uploads" or "FTP Uploads" tab. The status will say "Zamba processing succeeded" when the labels spreadsheet is ready for download.

Since it can take a few days to process a large quantity of videos, we'll send you an email from zamba@drivendata.org when the labels are ready. In the meantime, feel free to close the webpage and take a walk.



The link in the email will take you to your Uploads page, where there will be a button that says Download Labels. Click on this to download the csv file, which can be opened in Excel, Numbers, Google Sheets, or read by analytic software like R or Python.

You can re-download the labels at any point from your Uploads page when you are logged in. Once your videos are uploaded, you can also generate labels with other models by clicking Run different model.

Understanding the labels spreadsheet

For each species label in the list above, Zamba Cloud uses an advanced computer vision model to estimate the probability that that label applies to the video. Probabilities range between 0 and 1. For each species label, 0 means the species is definitely not in the video, 0.5 means there's a 50% chance the species is in the video, and 1 means the species is definitely in the video. For example, if the column for blank probability is 0.95 there is a 95% chance that the video is blank.

The labels spreadsheet has a row for each video. The columns are:

  • video_uuid: a Zamba-generated unique ID for the video
  • original_filename: original video filename (for direct uploads) or filepath (for FTP uploads)
  • top_1_label through top_3_probability: The next six columns contain the name and corresponding probability for the top three "most likely" species (i.e. the labels with probabilities closest to 1). These columns are intended as a shortcut to make filtering for videos of interest easy.
  • The remaining columns contain the probabilities for all of the possible labels, based on the model used for classification. These will not sum to 1 as there can be more than one species present in each video. Zamba Cloud individually estimates the probability that each species is present.

A labels spreadsheet could look like this:

| video_uuid | original_filename | top_1_label | top_1_probability | top_2_label | top_2_probability | ... | bird | blank | cattle | ... | corrected_label |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2056f94c | eleph.MP4 | elephant | 1.0000 | wild_dog_jackal | 0.0064 | ... | 0.0021 | 0.0003 | 0.0046 | ... | |
| e267d8f3 | leopard.MP4 | leopard | 1.0000 | small_cat | 0.0203 | ... | 0.0001 | 0.0000 | 0.0171 | ... | |
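
Because the spreadsheet is a plain CSV, it can also be explored programmatically. A minimal sketch using pandas, where the zamba_labels.csv file name is an assumption (download the file from your Uploads page first):

```python
import pandas as pd

labels = pd.read_csv("zamba_labels.csv")  # hypothetical file name

# Which species are most often the top prediction across your videos?
print(labels["top_1_label"].value_counts())

# How confident is the model in its top prediction overall?
print(labels["top_1_probability"].describe())
```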

Filtering videos by species

You can filter all of your uploaded videos by species from the Videos page. Adjust the threshold slider based on what level of certainty you'd like to have that the videos contain the given animal.



For more fine-grained filtering, use the labels spreadsheet downloaded in the previous step. A few basic approaches:

  • Filter by the most likely species columns: top_1_label, top_2_label, top_3_label. The spreadsheet makes it easy to get a list of videos where your animal of interest is in the top 1, 2, or 3 most likely labels.
  • Filter by the probability column for a specific species. For example, say you want to see only videos that have at least an 80% chance of containing a lion. Set a filter on the lion column for rows with a value of 0.8 or greater.

Either of these approaches can be used to filter out videos that contain no wildlife: filter for rows where none of the top 3 labels are blank, or apply a "less than" filter on the blank probability column.
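
Both approaches take only a few lines in pandas as well. A minimal sketch, where the file name, the lion example, and the thresholds are illustrative:

```python
import pandas as pd

labels = pd.read_csv("zamba_labels.csv")  # hypothetical file name

# Approach 1: videos where "lion" appears among the top 3 most likely labels.
top_cols = ["top_1_label", "top_2_label", "top_3_label"]
lion_in_top_3 = labels[labels[top_cols].eq("lion").any(axis=1)]

# Approach 2: videos with at least an 80% chance of containing a lion.
likely_lion = labels[labels["lion"] >= 0.8]

# Weeding out false triggers: keep videos with a low blank probability.
wildlife_only = labels[labels["blank"] < 0.5]

print(len(lion_in_top_3), len(likely_lion), len(wildlife_only))
```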

Submitting corrections

Zamba Cloud relies on user-labeled data to improve its predictions. If you have videos where Zamba Cloud did not predict the right species, let us know!

The easiest way to submit corrections is using the downloaded labels spreadsheet.

  1. Fill in the corrected_label column - the last column in the spreadsheet.

    • The label you put in this column must exactly match the column name for the species. To see the correct name formatting, find the corresponding probability column for the correct species, e.g. chimpanzee_bonobo or domestic_cat.
    • The file should be saved as a CSV, which should be an export option from your spreadsheet tool. (If you prefer to prepare the corrections file programmatically, see the sketch after these steps.)
    • Columns other than video_uuid and corrected_label are ignored in the corrections spreadsheet, so you can leave them exactly as they were downloaded from Zamba Cloud.
    • If you have multiple species in a video, copy the entire row so that the same video appears twice, with one corrected_label per row (as shown in the example below for video 123-456).
    • For videos where the top_1_label is correct, the corrected_label column can be left blank, or you can confirm the correct label by entering it in that column.

    Example spreadsheet of corrections:

    | video_uuid | ... | ... | corrected_label | |
    |---|---|---|---|---|
    | 9ab-c65 | ... | ... | blank | |
    | 89a-000 | ... | ... | | ← video where label is already correct |
    | 123-456 | ... | ... | duiker | ← video with multiple species |
    | 123-456 | ... | ... | forest_buffalo | |
  2. Select the Submit Correction tab in the upper right corner. Then drag and drop or click Upload File to select the corrections spreadsheet you want to upload.

  3. Finally, click Submit correction.
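
If you have many corrections, the file can also be prepared programmatically from the downloaded spreadsheet. A minimal sketch, where the file names and example video_uuid values are illustrative; only the video_uuid and corrected_label columns matter for the corrections upload:

```python
import pandas as pd

labels = pd.read_csv("zamba_labels.csv")  # hypothetical downloaded labels file

corrections = labels[["video_uuid"]].copy()
corrections["corrected_label"] = ""  # blank means the top_1_label was already correct

# Example corrections: one video is actually blank, another contains two species.
corrections.loc[corrections["video_uuid"] == "9ab-c65", "corrected_label"] = "blank"
corrections.loc[corrections["video_uuid"] == "123-456", "corrected_label"] = "duiker"

# For a multi-species video, append a second row with the same video_uuid.
extra_row = pd.DataFrame([{"video_uuid": "123-456", "corrected_label": "forest_buffalo"}])
corrections = pd.concat([corrections, extra_row], ignore_index=True)

corrections.to_csv("zamba_corrections.csv", index=False)  # upload this file as the correction
```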

How to train your own model

This section walks through how to train a model tailored to your own species or ecosystems. To train a model, you need to have a set of videos that are already labeled with the correct species. Your species can be either a subset of the ones that official models are already trained to predict, or completely new ones!

The more training data you have, the better the resulting model will be. We recommend having a minimum of 100 videos per species. Having an imbalanced dataset - for example, where most of the videos are blank - is okay as long as there are enough examples of each individual species.

Video labels

Save a csv file with the correct labels for each training video on your local computer. The labels file should have columns for:

  • filepath: name of the video. Video file names must be unique; otherwise the species labels cannot be matched to the correct videos.
  • label: the correct species label. If your labels are a subset of the ones predicted by an official model, they should match the pretrained model labels exactly wherever possible. If more than one species appears in a video, enter multiple rows with the same filepath value and a different label value.

Example label file:

| filepath | label |
|---|---|
| blank.MP4 | blank |
| chimp.MP4 | chimpanzee_bonobo |
| eleph.MP4 | elephant |
| leopard.MP4 | leopard |
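
If your training videos are already organized on disk, the labels file can be generated rather than typed by hand. A minimal sketch, assuming a hypothetical layout with one subfolder per species (note that this simple layout cannot express multi-species videos, which need an extra row per extra species):

```python
import pandas as pd
from pathlib import Path

SUPPORTED_EXTENSIONS = {".avi", ".mp4", ".mpeg", ".asf"}

# Hypothetical layout: training_videos/<species_label>/<video file>
rows = [
    {"filepath": video.name, "label": video.parent.name}
    for video in Path("training_videos").glob("*/*")
    if video.suffix.lower() in SUPPORTED_EXTENSIONS
]
labels = pd.DataFrame(rows)

# File names must be unique so each label can be matched to the right video.
duplicated = labels[labels["filepath"].duplicated(keep=False)]
if not duplicated.empty:
    print("Warning: duplicate video file names:\n", duplicated)

# Check that each species has enough examples (at least ~100 recommended).
print(labels["label"].value_counts())

labels.to_csv("species_labels.csv", index=False)  # upload as the species label file
```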

Tutorial with direct uploads

Use this option to upload labeled training videos from local files on your computer.

Direct uploads work best with a fast internet connection, as limited bandwidth can cause the browser to time out partway through the upload. If you have a slow internet connection, either check the "fast upload" box (which uses your computer's resources to reduce the size of the video before uploading), or consider an FTP upload instead.



Tutorial using an FTP server

In order to use this option, your training videos have to already be uploaded to an FTP server. Many large organizations and universities run FTP servers, which can be used to host data for Zamba Cloud.



Step by step instructions

  1. From the Trained Models tab, click either Train Model From Direct Uploads or Train Model From FTP Videos depending on where your videos are stored.
  2. Upload your labeled videos.

    • For direct uploads: Drag and drop or click Select Video(s) to select the videos you want to upload.
    • For FTP uploads: Enter the FTP server URL path, username, and password in the corresponding fields. Keep in mind that with an FTP submission, all the videos in the specified folder will be submitted to Zamba Cloud. You will not be able to pick and choose which videos within the folder are processed.
  3. Drag and drop or click Upload species label file to upload the csv of your video labels.
  4. Click Upload or Upload from FTP at the bottom. This step may take a while if you are using a direct upload.
  5. On the next screen, use the dropdown menus under Zamba Cloud Match to match the labels in your videos to labels that the official models are already trained to predict. If your label is new, select None (use your label) from the dropdown. Then click Next: Summary.
  6. Enter a display name and (optionally) a description. You can also double check your training videos by clicking See all individual videos names and labels.
  7. You will be asked if you want to make the trained model available to other users. This means that other Zamba Cloud users will be able to select this model when they run their own jobs; they won't have access to the videos or labels you upload.
  8. You will be asked if your videos can be used for training other models. If you check this box, it indicates that the administrators of the site may use these videos in the future to make improvements to the open-source algorithms that underlie Zamba Cloud. This does not make the videos publicly available or available to other Zamba Cloud users.
  9. Click Start training model. Your work is done! Models may take some time to train, so feel free to close the browser and go about your day.
  10. You will get an email from zamba@drivendata.org when your model is ready. You can check the status in the Trained Models tab. It will also be added as an option in the dropdown when you are choosing which model to use to classify new videos.

FAQ

How does Zamba Cloud work?

Under the hood, Zamba Cloud runs a computer vision algorithm trained on thousands of hours of camera trap videos that has learned to estimate the probability that different species are present in the video. For more information on the origin of Zamba, check out https://zamba.drivendata.org/

How much does Zamba Cloud cost?

Currently Zamba Cloud is supported financially through 2021 by the Max Planck Institute for Evolutionary Anthropology with in-kind support from Heroku and Microsoft AI for Earth. It is free to use during this time. Because your labels can always be downloaded as a spreadsheet, you are not locked in to continuing to use this tool to access the labels that the algorithm predicted.

How long will it take to process my videos?

Depending on network conditions, you can expect processing to take about 12 hours for 1,000 videos. If you're experiencing issues, try limiting uploads to fewer than 2,500 videos at a time. Keep in mind that your job may take longer if it is in the queue behind another user's long running job.

How does Zamba Cloud train new models from user data?

Computer vision algorithms are only as good as the labeled data they're trained on. As a result, our official models can only predict the species that are included in the labeled training data we had available. Luckily, Zamba Cloud is a quick learner!

If you are able to gather labeled videos of your species of interest, you can pass those to Zamba Cloud and train a new model to identify your species. We recommend having at least 100 labeled videos of a species to retrain a model.

The retraining process starts with one of the official models, which have already learned from thousands of hours of camera trap footage. The model then continues training on the new data that you've provided.

For example, say that you want to identify ostriches specifically (who wouldn't? They're huge!). Our official model can identify "Large flightless bird," but that also includes things like emus. If you provide new labeled videos of "ostriches" specifically, the model will be able to combine its existing knowledge of how to identify all large flightless birds with the new information about how to distinguish ostriches from other species.

How can I contribute to Zamba Cloud?

We're always looking for partners who can share their data to help us improve the accuracy on current species detection as well as expand to new species.

You can let us know where Zamba Cloud got things wrong by submitting corrected labels. If you have additional data that is already labeled, we'd love to hear from you at zamba@drivendata.org.

Can I access the algorithms behind Zamba Cloud?

The algorithms behind Zamba Cloud are openly available for anyone to learn from and use. You can find the latest version of the project codebase on GitHub.

The project is structured as an open-source command line tool and Python package, with which you can run inference, train your own models, and even make your models available to the community.

Where did the labeled videos used in Zamba Cloud come from?

For details about the labeled videos used to train the official models in Zamba Cloud, see the main Project Zamba webpage.

This application has been developed and made available thanks to the generous support of the Max Planck Institute for Evolutionary Anthropology, the Arcus Foundation, and the Patrick J. McGovern Foundation.

Our video processing algorithms run on Microsoft Azure thanks to support provided by Microsoft's AI for Earth program.

Created with support from our friends at Heroku

Built and maintained by DrivenData