What is GridFS?
What is it?
A way to store files in your database that doesn't suck.
A way to ensure that metadata is always kept with a file.
A way to get all the scaling benefits of MongoDB for files.
How does it work?
GridFS splits a large file into small chunks, which lets you store files beyond the 16 MB BSON document limit. Each chunk is stored as a separate document in a chunks collection. Metadata about the file, including the filename, content type, and any optional information needed by the developer, is stored as a document in a files collection. These collections can have an arbitrary namespace!
chunkSize controls the size of each chunk: GridFS divides the file into chunks of the size specified here. The default size is 255 kilobytes. (Changed in version 2.4.10: the default chunk size changed from 256 KB to 255 KB.)
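With the default chunk size, the number of chunk documents a file needs is just ceiling division of its length. A quick Python sketch (255 KB = 261120 bytes, the same chunkSize value that shows up in the example session later in this post):

```python
import math

CHUNK_SIZE = 255 * 1024  # default GridFS chunk size: 261120 bytes

def num_chunks(file_length: int) -> int:
    """Number of chunk documents needed for a file of this length."""
    # Note: for a zero-length file this returns 0, though as the
    # transcript below shows, mongofiles still writes one empty chunk.
    return math.ceil(file_length / CHUNK_SIZE)

print(num_chunks(12640))             # the 12640-byte c.txt fits in a single chunk
print(num_chunks(16 * 1024 * 1024))  # a 16 MB file needs 65 chunks
```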
When querying, each chunk is found and streamed separately, keeping the amount of data in the buffer low. GridFS places the collections in a common bucket by prefixing each with the bucket name. By default, GridFS uses two collections with names prefixed by the fs bucket:
fs.files
fs.chunks
Chunks collection document:

{
  "_id"      : <ObjectId>, // object id of the chunk in the chunks collection
  "files_id" : <ObjectId>, // id of the corresponding files collection entry
  "n"        : <num>,      // chunks are numbered in order, starting with 0
  "data"     : <binary>    // the chunk's payload as a BSON binary type
}

Files collection document:

{
  "_id"        : <ObjectId>,
  "filename"   : <string>, // the file name
  "chunkSize"  : <num>,    // the size of each chunk
  "uploadDate" : <date>,   // the date the record was created
  "md5"        : <string>, // MD5 hash of the complete file
  "length"     : <num>     // total length of the file in bytes
}
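As a rough sketch of the write path, here is what splitting a file into these documents looks like in Python. In-memory dicts and a plain counter stand in for real collections and ObjectIds; this illustrates the layout above, it is not a driver implementation:

```python
import hashlib
import itertools

CHUNK_SIZE = 255 * 1024  # default chunkSize (261120 bytes)

_ids = itertools.count(1)  # stand-in for ObjectId generation

def put_file(filename: str, data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split `data` into numbered chunk docs plus one files doc,
    mirroring the fs.files / fs.chunks layout above."""
    file_id = next(_ids)
    chunks = [
        {
            "files_id": file_id,                 # points back at the files doc
            "n": n,                              # chunks are ordered from 0
            "data": data[off:off + chunk_size],  # the chunk's payload
        }
        for n, off in enumerate(range(0, len(data), chunk_size))
    ]
    files_doc = {
        "_id": file_id,
        "filename": filename,
        "chunkSize": chunk_size,
        "md5": hashlib.md5(data).hexdigest(),  # hash of the complete file
        "length": len(data),                   # total length in bytes
    }
    return files_doc, chunks

files_doc, chunks = put_file("c.txt", b"x" * 12640)
print(files_doc["length"], len(chunks))  # 12640 1
```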
Why should I use it?
Archiving
Highly available metadata
Versioning
Dynamically expand capacity
Take an example:
[mongo@db231 ~]$ touch a.txt
[mongo@db231 ~]$ mongofiles put a.txt -u anbob -p mongo
connected to: 127.0.0.1
added file: { _id: ObjectId('5366f3f917d265ad5f30fde6'), filename: "a.txt", chunkSize: 261120, uploadDate: new Date(1399256057438), md5: "d41d8cd98f00b204e9800998ecf8427e", length: 0 }
done!
[mongo@db231 ~]$ mongofiles list -u anbob -p mongo
connected to: 127.0.0.1
a.txt   0
[mongo@db231 ~]$ mongofiles get a.txt -u anbob -p mongo
connected to: 127.0.0.1
done write to: a.txt
[mongo@db231 ~]$ mongofiles put b.txt -u anbob -p mongo
connected to: 127.0.0.1
added file: { _id: ObjectId('5366f4929f9a51b6a19d1de0'), filename: "b.txt", chunkSize: 261120, uploadDate: new Date(1399256210648), md5: "d41d8cd98f00b204e9800998ecf8427e", length: 0 }
done!
[mongo@db231 ~]$ mongofiles list -u anbob -p mongo
connected to: 127.0.0.1
a.txt   0
b.txt   0
c.txt   12640
[mongo@db231 ~]$ mongofiles search a -u anbob -p mongo
connected to: 127.0.0.1
a.txt   0
[mongo@db231 ~]$ mongo -u anbob -p mongo
MongoDB shell version: 2.6.0
connecting to: test
> show databases;
admin  0.078GB
local  0.078GB
test   0.078GB
> use test
switched to db test
> show collections;
fs.chunks
fs.files
impuser
system.indexes
testtab
> db.fs.files.find();
{ "_id" : ObjectId("5366f3f917d265ad5f30fde6"), "filename" : "a.txt", "chunkSize" : 261120, "uploadDate" : ISODate("2014-05-05T02:14:17.438Z"), "md5" : "d41d8cd98f00b204e9800998ecf8427e", "length" : 0 }
{ "_id" : ObjectId("5366f4929f9a51b6a19d1de0"), "filename" : "b.txt", "chunkSize" : 261120, "uploadDate" : ISODate("2014-05-05T02:16:50.648Z"), "md5" : "d41d8cd98f00b204e9800998ecf8427e", "length" : 0 }
{ "_id" : ObjectId("5366f4b9b4651853819dff35"), "filename" : "c.txt", "chunkSize" : 261120, "uploadDate" : ISODate("2014-05-05T02:17:29.648Z"), "md5" : "3614afb5016f2495811bd500e6719285", "length" : 12640 }
> db.fs.chunks.find();
{ "_id" : ObjectId("5366f3f991d72e59b571490a"), "files_id" : ObjectId("5366f3f917d265ad5f30fde6"), "n" : 0, "data" : BinData(0,"") }
{ "_id" : ObjectId("5366f49291d72e59b5714913"), "files_id" : ObjectId("5366f4929f9a51b6a19d1de0"), "n" : 0, "data" : BinData(0,"") }
{ "_id" : ObjectId("5366f4b991d72e59b5714918"), "files_id" : ObjectId("5366f4b9b4651853819dff35"), "n" : 0, "data" : BinData(0,"VUlEICAgICAgICBQSUQgIFBQSUQgIEMgU1RJTUUgVFRZICAgICAgICAgIFRJTUUgQ01ECnJvb3QgICAgICAgICAxICAgICAwICAwIEFwcjMwID8gICAgICAgIDAwOjAwOjAyIGluaXQgWzVdICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKcm9vdCAgICAgICAgIDIgICAgIDAgIDAgQXByMzAgPyAgICAgICAgMDA6MDA6MDAgW2t0aHJlYWRkXQpyb290 ...

-- over
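Reading works in reverse: fetch the chunk documents for a given files_id, sort them by n, and concatenate their payloads; the md5 stored in fs.files can then verify the result. A minimal sketch over plain dicts (no real MongoDB connection is made here; a driver would instead query fs.chunks with a sort on n):

```python
import hashlib

def get_file(files_doc: dict, chunk_docs: list) -> bytes:
    """Reassemble a file from its chunk documents, checking the stored md5."""
    ordered = sorted(
        (c for c in chunk_docs if c["files_id"] == files_doc["_id"]),
        key=lambda c: c["n"],  # chunks are numbered in order from 0
    )
    data = b"".join(c["data"] for c in ordered)
    if hashlib.md5(data).hexdigest() != files_doc["md5"]:
        raise ValueError("md5 mismatch: file is corrupt or incomplete")
    return data

# Example: one file split into two chunks, stored out of order.
files_doc = {"_id": 1, "md5": hashlib.md5(b"hello world").hexdigest()}
chunks = [
    {"files_id": 1, "n": 1, "data": b" world"},
    {"files_id": 1, "n": 0, "data": b"hello"},
]
print(get_file(files_doc, chunks))  # b'hello world'
```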