Fig. 8 Human activities from the AVA dataset: a human atomic movements (crawl, bend, sit, stand); b human–object interactions (answer phone, drive, row boat, work on a computer); and c human–human interactions (hand shake, kiss, hand clap, lift a person)