The idea for medashare is both a standard and an implementation for defining and sharing meta data about files. Some media formats can embed meta data, e.g. mp3, some document formats, and video files, but not all files are able to convey the complete and rich information that may be desired. This will allow you to identify files and associate external meta data with them.
The idea is also to be able to classify parts of the meta data as private (aka TLP:Red), so that you can mark files w/ your own tags and/or ratings, and that information will not be automatically shared.
The standard will also be able to define derivative works. For example, an album may have multiple tracks, so you can note that an mp3 file is a segment of the album, or that an mp3 file obtained from a music service is equivalent to the track. This will allow sharing meta data between mediums.
This derivation is also useful when files are programmatically transformed. Say an image is resized: adding a note that the smaller file is a derivative work can allow others to reproduce the file, and also saves you from reentering all the meta data associated with the new version of the file.
This can be useful for things like raw files from a camera, where you associate general picture information w/ the raw image data (but none of the associated processing data that formats like CR2 contain) so that the meta data is not lost.
This work is inspired by my work on STIX, a Cyber Threat Intelligence standard, which has many requirements similar to meta data sharing.
Everything must have a type. Not having well defined types can lead to confusion and problems. Different encoding schemes have different ways of encoding types. If the encoding scheme has a native way to encode that type, it should be used. In some cases, e.g. JSON, there are no formal types beyond numbers and strings, and in this case a type should (MUST? or via schemas?) be layered on top.
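As a hedged sketch (not part of the standard), one way to layer a type on top of JSON is to wrap each value in an object carrying an explicit type tag; the `type`/`value` field names below are hypothetical:

```python
import json

# Hypothetical sketch: JSON has no formal types beyond numbers/strings,
# so wrap each value in an object that carries an explicit type tag.
# The "type"/"value" field names are illustrative, not defined by these notes.
def tag(typename, value):
    return {"type": typename, "value": value}

record = {
    "dc:title": tag("string", "Example Title"),
    "modified": tag("timestamp", "2020-01-01T00:00:00Z"),
    "duration": tag("seconds", 214),
}

print(json.dumps(record, indent=2))
```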
Look at adding units.
These are the nodes that contain a majority of the data.
The following properties are present on all (most?) objects:
- producer_ref: UUID of the producer that created this object. (Add signing info.)
Properties:
- uuid: UUIDv4
- modified: date of last modification
- dc: A Dublin Core property
- object_marking_refs: Imported from STIX v2.0 Part 1, Section 3.1
- granular_markings: Imported from STIX v2.0 Part 1, Section 3.1
- lang: RFC XXXX language of the properties
- parent_ref: UUIDv4 of the parent MetaData Object. Any properties on this object override the parent. (Allow deletion via None/null?) Any missing properties are passed through to the parent for resolution.
Opinion Properties:
- qualityrating: On a scale from 1 (poor/terrible) to 5 (great/pristine), the subjective quality of the content.
The base object will contain all the data associated w/ the file (object). The base set of data is based upon the Dublin Core specification, as it provides a nice starting point and will map well to other systems out there.
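As a rough, hypothetical sketch of what such an object could look like when serialized (the field values are made up; only the property names come from the lists above):

```python
import json
import uuid

# Hypothetical example of a MetaData object using the properties listed above.
# Values are illustrative only; the serialization format is not fixed here.
metadata_obj = {
    "uuid": str(uuid.uuid4()),
    "modified": "2020-01-01T00:00:00Z",
    "producer_ref": str(uuid.uuid4()),   # common property from the section above
    "lang": "en",
    "dc:title": "Example Track",         # Dublin Core properties, namespaced with dc:
    "dc:creator": "Example Artist",
    "qualityrating": 4,                  # opinion property, 1..5
}

print(json.dumps(metadata_obj, indent=2))
```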
There may be a link to another MetaData object from which this one is derived. If there is, all the meta data from the object it derives from (and the objects that one derives from) must be included, except for properties that have been marked deleted or were overridden. When a property is marked as an opinion, it should not be inherited; if the new author agrees with the opinion, they have to restate it in their own object.
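A minimal sketch (assuming objects are plain dicts looked up by uuid; not the project's actual code) of resolving a property through the parent_ref chain, honoring None/null as deletion and never inheriting opinion properties:

```python
# Sketch only: objects are dicts, looked up by uuid in `objects_by_uuid`.
OPINION_PROPS = {"qualityrating"}

def resolve(obj, name, objects_by_uuid):
    """Return the effective value of `name` for `obj`, or None if deleted/absent."""
    cur = obj
    while cur is not None:
        if name in cur:
            return cur[name]            # explicit None/null means deleted
        if name in OPINION_PROPS:
            return None                 # opinions are never inherited from a parent
        parent_ref = cur.get("parent_ref")
        cur = objects_by_uuid.get(parent_ref) if parent_ref else None
    return None
```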
Custom properties must be prefixed w/ a namespace. The namespace is a name followed by a colon, as demonstrated above w/ dc for Dublin Core.
The link to the meta data object must include the version referenced, as the referenced object may change. A three-way merge may be needed when updating an object whose derived-from object has also been updated, if the new information is wanted.
If a property is imported from the blob itself, it is recommended to mark it as such via a granular marking; see X for more info on how to do this.
Open Questions: When meta data is “declassified”, how do you maintain a link to the classified version?
Properties:
- uuid: UUIDv4
- blobhash: Hash of the blob. This needs to be indexed.
- metadata_ref: UUID of the MetaData Object
This is the main mapping object. It maps a set of binary data to the MetaData object. All the data must be stored on the MetaData object. The reason it has a UUIDv4 is that this is your private mapping for the blob. You could possibly have multiple mappings, but most people will only have one, and this also allows you to publish your mapping and coexist w/ other producers' mappings.
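A hedged sketch of building such a mapping entry, assuming sha256 over the raw bytes as the blob hash (the hash choice is an assumption here, not something these notes specify):

```python
import hashlib
import uuid

def make_mapping(path, metadata_uuid):
    """Sketch: tie a blob's hash to the MetaData object that describes it."""
    with open(path, "rb") as fp:
        blobhash = "sha256:" + hashlib.sha256(fp.read()).hexdigest()
    return {
        "uuid": str(uuid.uuid4()),      # your private mapping identity
        "blobhash": blobhash,           # indexed for lookup
        "metadata_ref": metadata_uuid,
    }
```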
Properties:
- uuid: UUIDv5
- modified: date of last modification of the object
- blobhash: Hash of the binary data
- stat: Stats for the file (modified time, file size), used to detect when the file has been changed/modified. If the stats do not match, check the hash and possibly create a derivative blob object?
A File object references a Blob object and contains information about the file name in the file system associated w/ the blob. This is used to speed up looking up blob objects.
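A hedged sketch of a File object and the stat-based change check, assuming the UUIDv5 is derived from the file path (how the UUIDv5 is derived is not specified in these notes; the namespace below is made up):

```python
import hashlib
import os
import uuid

# Hypothetical namespace for deriving UUIDv5 values from paths.
NAMESPACE = uuid.uuid5(uuid.NAMESPACE_URL, "urn:example:medashare")

def make_file_object(path):
    """Sketch of a File object: hash of the blob plus stat info for change detection."""
    st = os.stat(path)
    with open(path, "rb") as fp:
        blobhash = "sha256:" + hashlib.sha256(fp.read()).hexdigest()
    return {
        "uuid": str(uuid.uuid5(NAMESPACE, os.path.abspath(path))),
        "modified": st.st_mtime,
        "blobhash": blobhash,
        "stat": {"mtime": st.st_mtime, "size": st.st_size},
    }

def stats_match(fileobj, path):
    """If this returns False, re-hash the file and possibly create a derivative blob."""
    st = os.stat(path)
    return (st.st_mtime == fileobj["stat"]["mtime"]
            and st.st_size == fileobj["stat"]["size"])
```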
A Container object references one or more File objects. This is for representing containers such as zip or tar.gz files, but is also for BitTorrent hashes (even for single file torrents).
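A hypothetical shape for a Container object (the field names are assumed, not defined by these notes):

```python
# Hypothetical Container object for a tar.gz or torrent: it simply references
# the File objects of its members by uuid.
container = {
    "uuid": "00000000-0000-4000-8000-000000000001",   # placeholder UUIDv4
    "file_refs": [
        "00000000-0000-4000-8000-000000000002",       # member File object uuids
        "00000000-0000-4000-8000-000000000003",
    ],
}
```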
A URL object is similar to the File Object, but is for web resources.
These are the edges that connect the nodes. For the most part they do not contain any data.
The two linked nodes, required to be File Objects, are equivalent.
Ask cvoid: Should a file system reference point to the blob hash or the UUIDv4 of the Blob object? The blob hash requires a lookup, and maybe selection? Maybe both, to denote the selection? Likely File Objects are going to be private, so this is an internal optimization? This will likely be different for URL objects as they are more public, whereas the file system is often local only (unless on a shared, e.g. work, system).
A DHT looks like a good option for finding things. IPFS is a great option for storing the data and allowing peers to find it, but it does NOT provide a search solution. It should be possible to combine the hash tree crypto solution along w/ the DHT to provide a way to build up an index for a piece of data.
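A toy sketch of the kind of index a DHT could hold, assuming each search term maps to a posting list of content hashes (a local dict stands in for the DHT here; none of this is specified by these notes):

```python
from collections import defaultdict

# Toy, in-memory stand-in for a DHT-backed index: term -> set of content hashes.
index = defaultdict(set)

def add_to_index(terms, objhash):
    for term in terms:
        index[term.lower()].add(objhash)

def lookup(term):
    return index.get(term.lower(), set())

add_to_index(["example", "artist"], "sha256:abc123")
print(lookup("artist"))
```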
Need to look at DSHT.
Thoughts:
For search, you need two functions:
How to do lookup:
How to add an object:
Validating the object:
Attacks to prevent:
- Adding random hashes that don't map to anything.
- Adding valid hashes that don't have the proper term in them.
When adding hashes, limit the number of unverified hashes per block iteration.
The issue is: how do you accept that a new block is valid? Some items are attempted to be fetched (likely based upon generation) and validated. Ones that are not validated are marked as such, and after a period of time of remaining unvalidated they are removed. New objects likely need to be validated in the immediately following block to help prevent bad growth. Likely there need to be multiple "live" blocks that are intermingled. This can be done via a simple LFSR + count, likely dependent upon the number of updates and the size and difficulty of validating objects.
When updating, always check for n + 1 until n is not found. When publishing, depending upon the timeframe, select n where n is the smallest that still meets the time parameter.
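One hedged reading of the update rule above, where `fetch(n)` is a hypothetical lookup that returns None when block/version n does not exist:

```python
def latest(fetch, last_known=0):
    """Walk forward from the last known n, checking n + 1 until it is not found."""
    n = last_known
    while fetch(n + 1) is not None:
        n += 1
    return n
```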
IPFS: not sure if it has a final hash mapping system, but each object in IPFS may have a different multihash depending upon how the blob/list is broken out.
Need a mapping system like: sha256:XXX -> ipfs:XLYkgq61DYaa8Nh3cq1U7rLinSa7dSHQ16x
This could/should include multiple other items, like maybe block-level hashing, though if there's an IPFS entry, that provides it.
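A hypothetical shape for that mapping, reusing the example values from the note above:

```python
# Stable content hash -> IPFS multihash/CID currently holding those bytes.
# Both values are the example placeholders from the note above.
hash_to_ipfs = {
    "sha256:XXX": "ipfs:XLYkgq61DYaa8Nh3cq1U7rLinSa7dSHQ16x",
}

def locate(sha256_hash):
    return hash_to_ipfs.get(sha256_hash)
```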
Reference Info:
https://wiki.freedesktop.org/www/CommonExtendedAttributes/ Dublin Core reference by Freedesktop