Blockchain infrastructure developer Brevis has introduced a media verification system called Brevis Vera that aims to confirm whether published images or videos originate from real-world capture events.
The system focuses on cryptographic provenance rather than detection. Instead of trying to determine whether content appears artificially generated, Vera allows media to carry verifiable proof of its origin and editing history.
The approach reflects growing concern about the increasing quality of synthetic media. Advances in generative AI have made deepfakes difficult to distinguish from authentic images or video footage, even for trained observers. Tools designed to detect synthetic media often struggle to keep pace as generation techniques evolve.
Verifying media from capture to publication
Brevis Vera builds on the Coalition for Content Provenance and Authenticity framework, commonly referred to as C2PA. The standard allows compatible devices to cryptographically sign media at the moment it is captured, linking the file to a specific piece of hardware and creating tamper-evident provenance data.
This signature establishes that the original content was recorded by a physical device rather than synthesized by software. However, raw images and videos are rarely published without edits, which creates challenges for preserving authenticity throughout the production process.
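The signing-at-capture idea can be illustrated with a minimal sketch. This is not Brevis or C2PA code: the record layout, field names, and the use of an HMAC with a hypothetical shared device key are simplifications for illustration; real C2PA manifests use asymmetric signatures backed by X.509 device certificates.

```python
import hashlib
import hmac
import json

# Hypothetical device key for illustration only. A real C2PA-capable
# camera signs with a private key whose certificate chains to a trusted
# issuer, so no shared secret is needed for verification.
DEVICE_KEY = b"secret-provisioned-into-camera-hardware"

def sign_capture(media_bytes: bytes, device_id: str) -> dict:
    """Create a tamper-evident provenance record at capture time."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    claim = {"device_id": device_id, "media_sha256": digest}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_capture(media_bytes: bytes, record: dict) -> bool:
    """Check the record's signature and that the file still matches it."""
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False
    return hashlib.sha256(media_bytes).hexdigest() == record["claim"]["media_sha256"]
```

Changing either the media bytes or any field of the claim invalidates the record, which is the tamper-evidence property the article describes.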
Vera attempts to address this issue by preserving cryptographic proof across the entire media lifecycle. The system tracks transformations applied during editing while maintaining a verifiable link to the original signed file.
Zero-knowledge proofs for editing workflows
To maintain this chain of authenticity, Vera uses the Brevis Pico zkVM, a zero-knowledge virtual machine designed to generate proofs of computational processes.
When supported editing tools modify a media file, Vera processes the original signed file and the editing operations. It then produces a zero-knowledge proof confirming that the final version derives from the original capture and that only permitted transformations were applied.
The proof verifies three conditions: the published output originates from the signed source file, every modification used only permitted transformations, and no hidden edits or inserted content were introduced during the process.
Because the system uses zero-knowledge proofs, the verification process can occur without exposing the raw media file or the details of the editing workflow.
Verification in Vera does not rely on centralized authorities or proprietary platforms. Anyone with access to the proof can independently confirm the authenticity of the published media without needing access to the original source files.
The system is also designed to work with open-source editing libraries, allowing developers to integrate verification capabilities into media production workflows.
According to the project, this model shifts the focus from visual inspection or AI detection toward cryptographic proof. Instead of asking whether a piece of media appears genuine, the system attempts to determine whether it can demonstrate verifiable provenance.
As synthetic media tools become more widely available, systems that establish authenticity at the moment of capture and preserve that proof throughout editing may become increasingly important for journalism, research, and digital publishing.

Dan Burgin
Vladislav Sopov