The first step is to call the Mode API on a schedule to export report definitions, query contents, and dashboards. A script scheduled with cron or an orchestration platform such as Airflow can walk the /spaces, /reports, and /queries endpoints and store the JSON responses in a secure cloud storage bucket (AWS S3, Google Cloud Storage, or Azure Blob Storage). This gives you a versioned backup of your analytics assets. Timestamp each export so you can roll back to a known-good point if something is deleted or corrupted.
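As a concrete illustration, here is a minimal sketch of such an export job in Python. The workspace name, bucket name, and credential environment variables are placeholders, and the endpoint paths and HAL-style response shapes (reports listed per space under `_embedded`) follow Mode's public API conventions, so verify them against your own workspace before scheduling this.

```python
import datetime
import json
import os

import boto3
import requests

WORKSPACE = "acme"              # hypothetical workspace name
BUCKET = "mode-backups"         # hypothetical S3 bucket
BASE = f"https://app.mode.com/api/{WORKSPACE}"
# Mode API tokens authenticate via HTTP Basic (token + secret).
AUTH = (os.environ["MODE_TOKEN"], os.environ["MODE_SECRET"])


def get(path: str) -> dict:
    resp = requests.get(f"{BASE}{path}", auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()


def export_workspace() -> None:
    s3 = boto3.client("s3")
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H%M%SZ")
    spaces = get("/spaces")
    for space in spaces["_embedded"]["spaces"]:
        reports = get(f"/spaces/{space['token']}/reports")
        for report in reports["_embedded"]["reports"]:
            queries = get(f"/reports/{report['token']}/queries")
            payload = {"report": report, "queries": queries}
            # Timestamped key so every run is a distinct, restorable snapshot.
            key = f"{stamp}/{space['token']}/{report['token']}.json"
            s3.put_object(Bucket=BUCKET, Key=key,
                          Body=json.dumps(payload).encode("utf-8"))


if __name__ == "__main__":
    export_workspace()
```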
As for the underlying data, Mode does not store it; it issues queries against your connected warehouse (Snowflake, BigQuery, Redshift, etc.). That means you back up your actual data at the warehouse level. Be sure to enable snapshotting or a time-travel backup feature there, because the Mode exports restore only queries and reports, not the underlying tables.
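The mechanics of the warehouse side depend on your platform. As one example, Snowflake's Time Travel lets you clone a table as it existed at an earlier point; the sketch below assumes placeholder connection parameters, a hypothetical ORDERS table, and a one-hour offset.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    user="BACKUP_ADMIN",            # hypothetical service user
    password="...",                 # pull from a secrets manager in practice
    account="myorg-myaccount",      # hypothetical account identifier
    warehouse="ADMIN_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # Recreate the table as it looked one hour ago. Time Travel retention
    # must cover the offset (1 day by default, up to 90 on Enterprise).
    cur.execute("CREATE TABLE ORDERS_RESTORED CLONE ORDERS AT (OFFSET => -3600)")
finally:
    cur.close()
    conn.close()
```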
Concerning restores: if a user deletes a report, you can rebuild it from the most recent backed-up JSON, either via the API or by recreating it manually in the Mode UI. For complete disaster recovery, document the process clearly: which backups exist, where they live, and who is authorized to restore them.
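If you take the manual route, a small helper that pulls the archived JSON and prints each query's SQL makes the rebuild straightforward. This sketch reuses the bucket and key layout from the export job above and assumes each archived query carries Mode's raw_query field, which is worth confirming against your own exports; the key in the usage line is a hypothetical snapshot path.

```python
import json

import boto3


def print_report_sql(bucket: str, key: str) -> None:
    """Fetch an archived report and print its queries for manual re-entry."""
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket=bucket, Key=key)
    payload = json.loads(obj["Body"].read())
    print(f"Report: {payload['report'].get('name', key)}")
    for query in payload["queries"]["_embedded"]["queries"]:
        print(f"\n-- Query: {query['name']}")
        print(query["raw_query"])  # the SQL Mode ran against the warehouse


# Example: rebuild a report from a specific timestamped snapshot.
print_report_sql("mode-backups", "2024-06-01T020000Z/space_abc/report_xyz.json")
```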
Finally, do not neglect access controls and encryption: encrypt backup files at rest and in transit, and restrict restores to an administrative role. To keep the whole process dependable, run a test restore once a quarter, even a basic dry run, so you don't discover that your API key is broken on the day you actually need it.
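A dry run can be as small as the following sketch: it checks that the API credentials still authenticate against the workspace endpoint and that at least one backup object is reasonably fresh. The workspace name, bucket, and 30-day freshness window are assumptions to tune.

```python
import datetime
import os

import boto3
import requests


def dry_run(workspace: str, bucket: str) -> None:
    # 1. Do the API credentials still work? The workspace root requires auth.
    resp = requests.get(
        f"https://app.mode.com/api/{workspace}",
        auth=(os.environ["MODE_TOKEN"], os.environ["MODE_SECRET"]),
        timeout=30,
    )
    resp.raise_for_status()

    # 2. Is there at least one backup newer than 30 days?
    s3 = boto3.client("s3")
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=30)
    page = s3.list_objects_v2(Bucket=bucket, MaxKeys=1000)
    newest = max((o["LastModified"] for o in page.get("Contents", [])), default=None)
    if newest is None or newest < cutoff:
        raise RuntimeError(f"No backup newer than {cutoff:%Y-%m-%d} in s3://{bucket}")
    print(f"OK: credentials valid, newest backup {newest:%Y-%m-%d}")


dry_run("acme", "mode-backups")
```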