Author Topic: The Random Database Info Thread  (Read 4373 times)



adarqui

Re: The Random Database Info Thread
« Reply #1 on: April 30, 2013, 08:33:48 pm »
REDIS

http://redis.io/topics/quickstart
http://redis.io/commands
http://redis.io/topics/mass-insert

make
make test
cp src/redis-server /usr/local/bin
cp src/redis-cli /usr/local/bin
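
quick sanity check after the copy (a minimal sketch; --daemonize just backgrounds the server, and PONG is the expected reply):

Code: [Select]
redis-server --daemonize yes
redis-cli ping
PONG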


Basically, Sorted Sets are in some ways the Redis equivalent of indexes in the SQL world

Code: [Select]
$ redis-cli zadd hackers 1940 "Alan Kay"
(integer) 1
$ redis-cli zadd hackers 1953 "Richard Stallman"
(integer) 1
$ redis-cli zadd hackers 1965 "Yukihiro Matsumoto"
(integer) 1
$ redis-cli zadd hackers 1916 "Claude Shannon"
(integer) 1
$ redis-cli zadd hackers 1969 "Linus Torvalds"
(integer) 1
$ redis-cli zadd hackers 1912 "Alan Turing"
(integer) 1
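
with birth years as the scores, range queries on the set then behave like an index lookup. a small sketch against the hackers set above:

Code: [Select]
$ redis-cli zrangebyscore hackers -inf 1950
1) "Alan Turing"
2) "Claude Shannon"
3) "Alan Kay"
$ redis-cli zrange hackers 0 -1
1) "Alan Turing"
2) "Claude Shannon"
3) "Alan Kay"
4) "Richard Stallman"
5) "Yukihiro Matsumoto"
6) "Linus Torvalds"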

monitor command:
http://redis.io/commands/monitor




try redis:
http://try.redis.io/


redis mass insert:
http://redis.io/topics/mass-insert
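
the gist of that page: generate the commands (ideally as raw Redis protocol) into a file and feed it to redis-cli --pipe. a minimal sketch, assuming a file named data.txt produced the way the doc describes:

Code: [Select]
cat data.txt | redis-cli --pipe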



sunionstore:
Code: [Select]
redis> sadd mydeck 1
(integer) 1
redis> sadd mydeck 2
(integer) 1
redis> sadd mydeck 3
(integer) 1
redis> smembers mydeck
1) "1"
2) "2"
3) "3"
redis> sunionstore tempdeck mydeck
(integer) 3
redis> smembers mydeck
1) "1"
2) "2"
3) "3"
redis> smembers tempdeck
1) "1"
2) "2"
3) "3"


srandmember:
http://redis.io/commands/srandmember
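
tying it to the deck example above: SUNIONSTORE copied mydeck into tempdeck, so you can draw destructively from the copy with SPOP while SRANDMEMBER just peeks without removing anything. a small sketch (both pick at random, so your values will differ):

Code: [Select]
redis> srandmember mydeck
"3"
redis> spop tempdeck
"2"
redis> smembers tempdeck
1) "1"
2) "3"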



http://blog.mjrusso.com/2010/10/17/redis-from-the-ground-up.html
http://www.redisgreen.net/blog/2013/03/18/intro-to-lua-for-redis-programmers/






twitter example:
http://redis.io/topics/twitter-clone

adarqui

Re: The Random Database Info Thread
« Reply #2 on: April 30, 2013, 08:36:29 pm »
my redis-benchmark:

Code: [Select]
root@serv:~# redis-benchmark
====== PING_INLINE ======
  10000 requests completed in 0.08 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
128205.12 requests per second

====== PING_BULK ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
140845.06 requests per second

====== SET ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
140845.06 requests per second

====== GET ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
138888.89 requests per second

====== INCR ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
142857.14 requests per second

====== LPUSH ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
142857.14 requests per second

====== LPOP ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
140845.06 requests per second

====== SADD ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
142857.14 requests per second

====== SPOP ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
140845.06 requests per second

====== LPUSH (needed to benchmark LRANGE) ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
144927.55 requests per second

====== LRANGE_100 (first 100 elements) ======
  10000 requests completed in 0.16 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
62500.00 requests per second

====== LRANGE_300 (first 300 elements) ======
  10000 requests completed in 0.46 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

35.07% <= 1 milliseconds
99.30% <= 2 milliseconds
99.51% <= 3 milliseconds
100.00% <= 3 milliseconds
21929.82 requests per second

====== LRANGE_500 (first 450 elements) ======
  10000 requests completed in 0.68 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.28% <= 1 milliseconds
65.80% <= 2 milliseconds
99.94% <= 3 milliseconds
100.00% <= 3 milliseconds
14705.88 requests per second

====== LRANGE_600 (first 600 elements) ======
  10000 requests completed in 0.97 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.07% <= 1 milliseconds
13.67% <= 2 milliseconds
99.82% <= 3 milliseconds
99.92% <= 4 milliseconds
100.00% <= 4 milliseconds
10351.97 requests per second

====== MSET (10 keys) ======
  10000 requests completed in 0.14 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

98.12% <= 1 milliseconds
100.00% <= 1 milliseconds
72463.77 requests per second


adarqui

Re: The Random Database Info Thread
« Reply #3 on: May 01, 2013, 12:14:36 am »
MONGODB

http://www.mongodb.org/ , click 'try it out' then type 'tutorial'

mongostat
mongo

the mongo shell is JavaScript-based, so you can run pure JavaScript from it:
for(i=0; i<10; i++) { print('hello'); };
etc.

explicitly save a document: db.bleh.save({})
db.scores.find({a: {'$gt': 15}});

to update a saved document:
db.bleh.update()

removing data:
db.bleh.remove()
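
a minimal sketch tying those together in the shell (reusing the scores collection from the find() example; the field values are just for illustration):

Code: [Select]
db.scores.save({ name : "derp", a : 20 })
db.scores.update({ name : "derp" }, { $set : { a : 25 } })
db.scores.find({ a : { $gt : 15 } })
db.scores.remove({ a : { $lt : 15 } })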


adarqui

Re: The Random Database Info Thread
« Reply #4 on: May 04, 2013, 10:45:59 am »
MYSQL:

incremental backup:
http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/mysqlbackup.incremental.html

master/slave replication:
http://www.packtpub.com/article/setting-up-mysql-replication-for-high-availability



statement vs row , replication:
http://dev.mysql.com/doc/refman/5.1/en/replication-formats.html
http://dev.mysql.com/doc/refman/5.1/en/replication-sbr-rbr.html
http://dev.mysql.com/doc/refman/5.1/en/binary-log-setting.html

variables:
http://dev.mysql.com/doc/refman/5.1/en/show-variables.html



table level lock issues:
http://www.mysqlperformanceblog.com/2012/07/31/innodb-table-locks/




mysql defrag:
http://blog.softlayer.com/2011/mysql-slow-check-for-fragmentation/


very useful table optimization script:
Code: [Select]
#!/bin/bash
# run OPTIMIZE TABLE on every table in every database (skips information_schema)

MYSQL_LOGIN='-uroot --password=B360_623xRQCz'

# list databases, dropping the "Database" header row and information_schema
for db in $(echo "SHOW DATABASES;" | mysql $MYSQL_LOGIN | grep -v -e "Database" -e "information_schema")
do
        # list tables, dropping the "Tables_in_<db>" header row
        TABLES=$(echo "USE $db; SHOW TABLES;" | mysql $MYSQL_LOGIN | grep -v Tables_in_)
        echo "Switching to database $db"
        for table in $TABLES
        do
                echo -n " * Optimizing table $table ... "
                echo "USE $db; OPTIMIZE TABLE $table" | mysql $MYSQL_LOGIN >/dev/null
                echo "done."
        done
done




http://www.ntchosting.com/mysql/create-user.html
http://www.debian-administration.org/articles/39




replication: single db
http://stackoverflow.com/questions/8605318/mysql-replication-slave-server-on-one-database
http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_replicate-do-db
http://dev.mysql.com/doc/refman/5.1/en/binary-log-setting.html
http://dev.mysql.com/doc/refman/4.1/en/replication-howto.html
http://www.rackspace.com/knowledge_center/article/mysql-replication-masterslave


07-28-2013
Code: [Select]
FLUSH TABLES WITH READ LOCK;
RESET MASTER;
SHOW MASTER STATUS;

mysql> show MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      107 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)


SET GLOBAL binlog_format = 'ROW';

UNLOCK TABLES;
 

on the master, put this in /etc/mysql/my.cnf:
binlog_do_db=production



on slave:
replicate-do-db=realtime_production
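
putting the two sides together, a minimal sketch of the relevant my.cnf pieces (the server-ids are arbitrary but must differ, the master needs the binlog enabled, and the db names just follow the lines above):

Code: [Select]
# master: /etc/mysql/my.cnf
[mysqld]
server-id     = 1
log_bin       = /var/log/mysql/mysql-bin.log
binlog_do_db  = production

# slave: /etc/mysql/my.cnf
[mysqld]
server-id        = 2
replicate-do-db  = realtime_production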



stop slave;
reset slave;
mysql ... < blah.sql

CHANGE MASTER TO
  MASTER_HOST='host',
  MASTER_USER='repl',
  MASTER_PASSWORD='slavepass',
  MASTER_LOG_FILE='mysql-bin.000003',
  MASTER_LOG_POS=239;

start slave;
show slave status \G






when shit gets annoying (skip past replication errors in a loop):
while true ; do mysql -u root -pPW -e "STOP SLAVE;  SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;" ; done



add user:
http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html
Code: [Select]
mysql> GRANT ALL ON *.* TO bar@'202.54.10.20' IDENTIFIED BY 'PASSWORD';
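
then, from the 202.54.10.20 box, connecting looks something like this (the hostname is a placeholder; the server also needs a non-local bind-address in my.cnf for remote access):

Code: [Select]
mysql -u bar -p -h your.db.server.example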



adarqui

Re: The Random Database Info Thread
« Reply #9 on: July 02, 2013, 09:49:29 pm »
MONGODB

http://mongoosejs.com/docs/guide.html

http://docs.mongodb.org/manual/faq/fundamentals/

http://docs.mongodb.org/manual/reference/sql-comparison/

http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/

http://docs.mongodb.org/manual/tutorial/getting-started/

http://docs.mongodb.org/manual/reference/object-id/#ObjectIDs-DocumentTimestamps

https://blog.serverdensity.com/choosing-a-non-relational-database-why-we-migrated-from-mysql-to-mongodb/

http://java.dzone.com/articles/mysql-vs-mongodb-complete

http://docs.mongodb.org/manual/core/indexes/

http://docs.mongodb.org/manual/reference/method/

http://docs.mongodb.org/manual/tutorial/model-tree-structures-with-nested-sets/

NICE: http://docs.mongodb.org/manual/core/data-modeling/
http://docs.mongodb.org/manual/tutorial/model-tree-structures-with-parent-references/
http://www.10gen.com/presentations/mongodb-melbourne-2012/schema-design-example

http://glassonionblog.wordpress.com/2013/04/07/mongodb-aggregate-lengths-of-neste-arrays/

http://www.slideshare.net/kbanker/mongodb-schema-design

IMP!!  http://docs.mongodb.org/manual/reference/operator/positional/
IMP!! http://learnmongo.com/posts/reaching-inside-with-dot-notation-and-elemmatch/



Code: [Select]
./mongod --nojournal --dbpath /home/mongo &



MONGOOSE

http://mongoosejs.com/docs/index.html

http://mongoosejs.com/docs/guide.html

http://mongoosejs.com/docs/api.html

http://mongoosejs.com/docs/2.7.x/docs/indexes.html


dope: http://mongoosejs.com/docs/plugins.html


http://mongoosejs.com/docs/2.7.x/docs/query.html

https://github.com/LearnBoost/mongoose/wiki/3.6-Release-Notes

http://www.poeze.nl/wp/?p=166

http://blog.mongolab.com/2013/04/thinking-about-arrays-in-mongodb/






Code: [Select]
db.setProfilingLevel(2)      // profile all operations (level 2 = everything)
db.system.profile.find()     // query the captured operations
show profile                 // shell helper for recent profiled ops
// recreate/resize the capped profile collection if needed:
... db.createCollection( "system.profile", { capped: true, size:4000000 } )
DBQuery.shellBatchSize=100   // show more results per page in the shell
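
to pull the most recent profiled operations, something like this (a sketch):

Code: [Select]
db.system.profile.find().sort({ ts : -1 }).limit(5).pretty()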


misc:
Code: [Select]
for(var v = 0; v < 10 ; v++) { db.crs.insert({ userName : "poop", objId : v, txt : "comment" }) }
db.crs.update({objId:9},{$inc:{objId:1}})
db.images.update( { "_id" : { $exists : true } }, { $set : { r : Math.random() } }, false, true)

mongoimport --collection images ~adarqui/images.json


db.images.find({}).forEach(function(doc) { doc.r = _rand() ; db.images.save(doc) })

db.objects.renameCollection('objs')
db.getCollection('objs').drop()

db.objs.find({ _id : ObjectId("51d6061be0cd85b2af8ce460") } )
db.objs.find({ _id : ObjectId("51d6061be0cd85b2af8ce460") } ).pretty()
db.objs.find({ n : 'DerpCity', 'comments.text' : { $ne : 'yo' }})
db.objs.find({ n : 'DerpCity', 'comments.text' : 'yo' })





random notes, future:

insert if not exists:
Code: [Select]
key = {'key':'value'}
data = {'key2':'value2', 'key3':'value3'};
// upsert:true inserts a new document if nothing matches key, otherwise updates the match
coll.update(key, data, {upsert:true});










presentations:
http://www.10gen.com/presentations/lessons-learned-migrating-2-billion-documents-craigslist
http://www.10gen.com/presentations/building-your-first-application-mongodb

adarqui

Re: The Random Database Info Thread
« Reply #10 on: September 19, 2013, 10:53:47 pm »
REDIS SCRIPTS


testing:

redis 127.0.0.1:6379> eval "return redis.call('lrange', 'poop', 0, -1)" 0
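
same lookup, but declaring the key through KEYS, which is the form Redis recommends for scripts (a sketch):

redis 127.0.0.1:6379> eval "return redis.call('lrange', KEYS[1], 0, -1)" 1 poop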



Code: [Select]
require 'rubygems'
require 'redis'

r = Redis.new

# Lua: push ARGV[1] onto the list KEYS[1] only if it isn't already in it;
# returns 0 on push, or the (1-based) position if the value already exists.
enqUniq = <<EOF
    local i = 1
    local res
    res = redis.call('LRANGE', KEYS[1], 0, -1)
    while true do
        if res[i] == nil then
            res = redis.call('LPUSH', KEYS[1], ARGV[1])
            return 0
        elseif res[i] == ARGV[1] then
            return i
        end
        i = i + 1
    end
    return 0
EOF

puts ARGV[0]
puts ARGV[1]
x = r.eval(enqUniq, :keys => [ARGV[0]], :argv => [ARGV[1]])
puts x
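
usage would look something like this, assuming the script above is saved as enq_uniq.rb (it prints the list key, the value, then the script's return: 0 on a fresh push, or the position if the value is already queued):

Code: [Select]
$ ruby enq_uniq.rb myqueue "some job"
myqueue
some job
0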


http://rubydoc.info/github/redis/redis-rb/Redis#script-instance_method

http://www.dragonslife.org/lua-functions-in-redis-with-string-splitting/technology/

http://www.redisgreen.net/blog/2013/03/18/intro-to-lua-for-redis-programmers/

http://sunilarora.org/redis-lua-scripting

http://blog.gvm-it.eu/post/45421023510/redis-lua-scripting-four-scripts-to-help-you-deal-with


lua functions avail to redis:
https://gist.github.com/evilpacket/3924845



nodejs / redis lua test

Code: [Select]
        var lua = [
          "function string:split(delimiter)",
          "  local result = { }",
          "  local from = 1",
          "  local delim_from, delim_to = string.find( self, delimiter, from )",
          "  while delim_from do",
          "    table.insert( result, string.sub( self, from, delim_from-1 ) )",
          "    from = delim_to + 1",
          "    delim_from, delim_to = string.find( self, delimiter, from )",
          "  end",
          "  table.insert( result, string.sub( self, from ) )",
          "  return result",
          "end",
          "local pids = ARGV[1]",
          "local posts = {}",
          "local s = string.split(pids,',')",
          "-- lua tables are 1-indexed, so iterate from 1 (not 0)",
          "for i = 1, #s, 1 do",
          "  local post_token = 'post:' .. tostring(s[i])",
          "  local ret = redis.call('hgetall', post_token)",
          "  posts[i] = ret",
          "end",
          "return posts"
        ].join("\n");

        // assuming a node_redis client and a comma-separated id string, e.g. "1,2,3":
        // client.eval(lua, 0, pids, function (err, posts) { console.log(posts); });


adarqui

Re: The Random Database Info Thread
« Reply #13 on: June 11, 2015, 01:16:21 am »
http://developer.olery.com/blog/goodbye-mongodb-hello-postgresql/

check some of these 'customer testimonials' though: https://www.mongodb.com/who-uses-mongodb

i've never had any problems with mongo.. though i've heard of people who have. at my current job, we have ~2TB in a single mongo instance with 760+ million documents. thing has never once even hiccuped.. but that's just one small example from personal experience.

some of those testimonials on their site are pretty impressive.

postgres is dope too.

pC!