Results 21 to 30 of 66

Thread: Hello

  1. #21
    Community Contributor
    Join Date
    Nov 2011
    Posts
    2,953
    I love achievement hunter.

  2. #22
    Who?
    Join Date
    Sep 2009
    Location
    Stockholm, Sweden
    Posts
    2,799
    Tomorrow I'm back on the job. I'll be looking at stability still as the service is currently resetting every 5 minutes.

    I think there is only a handful of you actually using it, but you'll notice bits going offline and frequent deployments as I include logging and perform isolation testing.

    You can kind of split up the code at procon.myrcon.com into three separate programs: one handles user interaction, another listens for and processes events, and the third connects to and synchronizes data with C#.

    The last one is completely unit tested and the other two have very minimal testing. I'll also be including some very broad tests tomorrow while I isolate the problematic branch.

    I'll keep you posted on what I find.
    I started at DICE late Oct. 2014, so ignore every post before that.

  3. #23
    Who?
    Join Date
    Sep 2009
    Location
    Stockholm, Sweden
    Posts
    2,799
    Quote Originally Posted by Phogue View Post
    I'll keep you posted on what I find.
    tl;dr: MongoDb hates archiving stuff. Looking at alternatives for event/chat/ban logging.

    An internal balancer that divides tasks over as many processes as we have touches a collection to let it know the node is still alive. This operation gets halted in a queue when the events collection has grown, so the node drops off, throws an error and has to restart.

    Most of the database is very small in size and suited pretty well for Mongo, but the events collection is an archive that grows. It was limited to a single day of information, but I was hoping to expand on this so server admins have detailed history of their servers going back at least a week - potentially a month.

    I'll look into running another database just for archived data like this; we may end up using MySQL for it.
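
    The balancer behaviour above is easy to picture with a toy heartbeat model. Everything here is hypothetical (the names, the timeout, the dict standing in for the Mongo collection); it just sketches the "touch a collection or get dropped" pattern being described:

    ```python
    # Toy model of the balancer heartbeat: each node periodically writes a
    # last-seen timestamp, and the balancer treats any node whose timestamp
    # is older than TIMEOUT as dead.
    TIMEOUT = 10.0  # seconds a node may go without touching the collection

    heartbeats = {}  # node_id -> last touch time (stand-in for the collection)

    def touch(node_id, now):
        """The small write that was getting stuck behind the events queue."""
        heartbeats[node_id] = now

    def live_nodes(now):
        """Nodes the balancer still considers alive."""
        return [n for n, t in heartbeats.items() if now - t <= TIMEOUT]

    touch("node-a", now=0.0)
    touch("node-b", now=0.0)
    touch("node-a", now=8.0)     # node-a keeps touching; node-b's write is halted
    print(live_nodes(now=12.0))  # ['node-a'] -- node-b timed out and drops off
    ```

    When the heartbeat write queues behind a slow events-collection operation, the node misses its window exactly like node-b here, even though the process itself is healthy.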
    I started at DICE late Oct. 2014, so ignore every post before that.

  4. #24
    Quote Originally Posted by Phogue View Post
    tl;dr: MongoDb hates archiving stuff. Looking at alternatives for event/chat/ban logging.

    An internal balancer that divides tasks over as many processes as we have touches a collection to let it know the node is still alive. This operation gets halted in a queue when the events collection has grown, so the node drops off, throws an error and has to restart.

    Most of the database is very small in size and suited pretty well for Mongo, but the events collection is an archive that grows. It was limited to a single day of information, but I was hoping to expand on this so server admins have detailed history of their servers going back at least a week - potentially a month.

    I'll look into running another database just for archived data like this; we may end up using MySQL for it.
    Check out:

    http://stackoverflow.com/questions/1...se-for-logging

    http://aws.amazon.com/simpledb/

    Graylog2:
    http://www.infoworld.com/t/log-analy...-rivals-237041
    Don't send me private messages (PMs) unless you really need privacy, like your game server password. If you just have a question or need help, post in one of the threads. It's extra work for me to answer questions and give help in private messages, and no one else gets the benefit of the answer.

  5. #25
    Quote Originally Posted by Phogue View Post
    tl;dr: MongoDb hates archiving stuff. Looking at alternatives for event/chat/ban logging.

    An internal balancer that divides tasks over as many processes as we have touches a collection to let it know the node is still alive. This operation gets halted in a queue when the events collection has grown, so the node drops off, throws an error and has to restart.

    Most of the database is very small in size and suited pretty well for Mongo, but the events collection is an archive that grows. It was limited to a single day of information, but I was hoping to expand on this so server admins have detailed history of their servers going back at least a week - potentially a month.

    I'll look into running another database just for archived data like this; we may end up using MySQL for it.
    Use Couchbase; this is exactly the sort of data it's built to handle.

  6. #26
    Hey wait, I have a coloured username? How did that happen?

    Oh and have fun with it all (again!). :-)

  7. #27
    Who?
    Join Date
    Sep 2009
    Location
    Stockholm, Sweden
    Posts
    2,799
    Quote Originally Posted by DarkLord7854 View Post
    Use Couchbase; this is exactly the sort of data it's built to handle.
    I've got this up and running on the problematic collection on my local. I'll run it when the server is next full. If it solves the issue then I might as well transfer over to that.
    I started at DICE late Oct. 2014, so ignore every post before that.

  8. #28
    Who?
    Join Date
    Sep 2009
    Location
    Stockholm, Sweden
    Posts
    2,799
    Quote Originally Posted by Phogue View Post
    I've got this up and running on the problematic collection on my local. I'll run it when the server is next full. If it solves the issue then I might as well transfer over to that.
    So far the test is going well. I've coupled Couchbase with Elasticsearch for the events storage. This replaces the need to pull tags out of the object and then index on them, but it does mean right now I'm not really using Couchbase and could instead set up Mongo to use Elasticsearch.

    The issue with the current MongoDB setup is the single events collection index exceeding available memory. If we didn't want to search the events collection then we wouldn't have a problem (yet).

    So here are some steps I'm going to take throughout the week.
    • Move a lot of calculations from Peeler to Potato. Allows plugins to access more current data.
    • Minimize existing database requirements, removing the requirement of a lot of aggregation currently done there.
    • Move a majority of statistics building from Peeler to Potato.
    • Treat the current database as a simple storage, not something we generate data from.
    • Move to Couchbase. It handled test bursts of 10k ops/sec and laughed at me.


    So far the open beta suggests it will scale right now if we don't have events logging, but it wouldn't be terribly cost effective. The Peeler is running smoothly on minimal CPU/memory usage, but I would like to get this to 'nearly nothing'.

    The Potato farm is using a little more CPU but far less memory than I was expecting. We went with a memory-optimized EC2 instance but may swap this over to a CPU-optimized one, especially if I'll be moving some calculations from Peeler to Potato.

    Deployments, status, billing etc seem to be working as expected.

    Without interruption (doubtful..) I would expect all these optimizations and the database change over to be completed in two weeks. We'll assess stability then and move on to posting about it on the forums/twitter etc and get a few more people onto the system.
    Last edited by Phogue; 15-06-2014 at 11:06.
    I started at DICE late Oct. 2014, so ignore every post before that.

  9. #29
    Quote Originally Posted by Phogue View Post
    So far the test is going well. I've coupled Couchbase with Elasticsearch for the events storage. This replaces the need to pull tags out of the object and then index on them, but it does mean right now I'm not really using Couchbase and could instead set up Mongo to use Elasticsearch.
    I don't really see why you need Elasticsearch for this? Couchbase will natively do most everything you need in terms of indexing and clustering, and it'll likely do it better than using it in conjunction with another service like Elasticsearch.

  10. #30
    Who?
    Join Date
    Sep 2009
    Location
    Stockholm, Sweden
    Posts
    2,799
    Quote Originally Posted by DarkLord7854 View Post
    I don't really see why you need Elasticsearch for this? Couchbase will natively do most everything you need in terms of indexing and clustering, and it'll likely do it better than using it in conjunction with another service like Elasticsearch.
    Predominantly dreaming too big. The history currently has tags extracted from an object, stored along with the document and indexed by this array. The recording of events was commented out entirely until open beta because I knew the whole solution required additional attention.

    Here is everything we store about a chat right now, kind of like a mini snapshot of a player and what they said.

    Code:
    {
        "_id" : ObjectId("53807f9fc7f9598c3ac477a7"),
        "Name" : "ProtocolChat",
        "CommunityId" : ObjectId("526a3068a069e4e425000000"),
        "ConnectionGuid" : "4b343b3b-313c-4808-96c2-eb62f26e76cd",
        "Data" : {
            "Now" : {
                "Content" : [ 
                    "fuck"
                ],
                "Players" : [ 
                    {
                        "Port" : "25862",
                        "Ip" : "85.225.80.243",
                        "Location" : {
                            "CountryCode" : "SE",
                            "CountryName" : "Sweden"
                        },
                        "Ping" : 41,
                        "Inventory" : {
                            "Now" : {
                                "Items" : [ 
                                    {
                                        "Tags" : [ 
                                            "Recon", 
                                            "Gadget", 
                                            "Explosive"
                                        ],
                                        "FriendlyName" : "C4",
                                        "Name" : "U_C4"
                                    }
                                ]
                            }
                        },
                        "Role" : {
                            "Name" : "2"
                        },
                        "Kdr" : 1.66666663,
                        "Deaths" : 3,
                        "Kills" : 5,
                        "Score" : 750,
                        "Name" : "bubblan1234",
                        "SlotId" : 25,
                        "Uid" : "EA_5454DFA8FE9A0E9BB4495BB96F0B85C8"
                    }
                ]
            }
        },
        "Tags" : [ 
            "ea_5454dfa8fe9a0e9bb4495bb96f0b85c8", 
            "bubblan1234", 
            "2", 
            "u_c4", 
            "c4", 
            "recon", 
            "gadget", 
            "explosive", 
            "sweden", 
            "se", 
            "85-225-80-243", 
            "25862", 
            "fuck"
        ],
        "__v" : 0
    }
    While it's not displayed, this information would eventually be available on https://procon.myrcon.com/en-gb/demo#/history with all those tags and time searchable.
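
    The Tags array at the bottom is just the searchable values pulled out of the document and lowercased (with the IP's dots swapped for dashes). The field names below come straight from the document above, but the traversal itself is my own rough sketch of that extraction, not the actual code:

    ```python
    def extract_tags(event):
        """Collect the lowercased, searchable values from a ProtocolChat event."""
        now = event["Data"]["Now"]
        tags = []
        for player in now["Players"]:
            tags.append(player["Uid"].lower())
            tags.append(player["Name"].lower())
            tags.append(player["Role"]["Name"].lower())
            for item in player["Inventory"]["Now"]["Items"]:
                tags.append(item["Name"].lower())
                tags.append(item["FriendlyName"].lower())
                tags.extend(t.lower() for t in item["Tags"])
            tags.append(player["Location"]["CountryName"].lower())
            tags.append(player["Location"]["CountryCode"].lower())
            tags.append(player["Ip"].replace(".", "-"))  # dots become dashes
            tags.append(player["Port"])
        tags.extend(c.lower() for c in now["Content"])
        return tags
    ```

    Run against the document above, this yields the same thirteen tags shown in its Tags array. It's that flat array, indexed on its own, which is what doubles the storage cost discussed below.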

    Eventually I was going to expire certain types of events and perhaps hold onto other events longer. It may be that the majority of the data being stored can be removed after a few hours or days, so only the highlights are archived. Mongo supports this on its side, but only for the entire collection, whereas Couchbase can expire individual documents.
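
    That per-document expiry idea can be sketched as a retention policy keyed by event type. The event names here match the document format above, but the retention periods and the default are made up for illustration:

    ```python
    from datetime import datetime, timedelta

    # Hypothetical retention policy: short-lived chat events, long-lived bans.
    # With per-document expiry (Couchbase-style) every event carries its own
    # deadline; Mongo's TTL index applies a single rule to the whole collection.
    RETENTION = {
        "ProtocolChat": timedelta(days=2),   # drop routine chatter quickly
        "ProtocolBan": timedelta(days=30),   # keep the highlights around
    }

    def expiry_for(event, stored_at):
        """Deadline after which this event may be purged from the archive."""
        return stored_at + RETENTION.get(event["Name"], timedelta(days=7))

    stored = datetime(2014, 6, 15)
    print(expiry_for({"Name": "ProtocolChat"}, stored))  # 2014-06-17 00:00:00
    print(expiry_for({"Name": "ProtocolBan"}, stored))   # 2014-07-15 00:00:00
    ```

    With Mongo's collection-wide TTL, everything would get the ProtocolChat treatment or the ProtocolBan treatment, never both at once.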

    Without events logging each community currently takes up about 20-30 MB of data, including the 48 hours of 1-minute-interval statistics snapshots. For the most part it's WYSIWYG and works as a proxy to the Potato, caching information for speed alone.

    Extracting the tags isn't terribly taxing and storing the document isn't a massive amount of data, but the index for that array of tags doubles the data size. Given my understanding of Mongo, exceeding the 1.75 GB of memory the current cluster has with a week's worth of data brings the entire system to a halt. Removing event logging has made the system spry again.

    So if I use another system for full text searching, we have a separate system that does a majority of the work for me (and better) and allows for partial text matching instead of the complete tagging we have right now. And if it grows to a fault, it won't bring down the critical systems for an otherwise "pretty neat history feature".
    I started at DICE late Oct. 2014, so ignore every post before that.
