Meet the Product Leaders series - Cosmos DB
Time: April 7, 2020, 10:00 AM–11:00 AM Pacific Time
Speaker: Andrew Liu, Lead Program Manager, Azure Cosmos DB
Join the "Meet the Azure Product Leaders" online series, where you will have the opportunity to connect directly with the engineering teams who are building Azure products. You will get firsthand information about product updates, including new features, roadmaps, and best practices. Each session will also include an "Ask Me Anything" forum where you can ask questions, provide product feedback, and connect with members of the product team.
In this session, we will take a broad look at Azure's highly scalable NoSQL database, Cosmos DB. We'll discuss real-world examples and trends in how Cosmos DB is being used in the wild, then lead into a discussion of best practices, including the top three things you should know to set your deployment up for success. We'll also cover our roadmap investments for the next Azure semester.
Please reply to this post and submit your questions in advance for the “Ask Me Anything” forum.
I'm accessing Cosmos DB directly through a Xamarin app using the V2 SDK. I've noticed that the memory usage by the SDK can get quite large (especially Newtonsoft objects). I've seen it as high as 80-100MB. Any suggestions on how I can manage and release that memory?
On top of my previous questions, we're also looking at how to use circuit breakers when Cosmos DB is being overloaded, to prevent our microservices from picking up any new events/messages from a service bus for a while. I would like to hear your thoughts on a good approach here as well! I'm especially interested in what would be advisable as a cool-off period once the server sends back a 429.
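To make the cool-off question concrete, here is a minimal, SDK-agnostic circuit-breaker sketch. It opens after a run of consecutive 429 responses, honors a server-supplied Retry-After value when one is available, and otherwise falls back to capped exponential backoff. The class name, thresholds, and backoff constants are illustrative assumptions, not Cosmos DB guidance.

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker for throttled (HTTP 429) responses.

    Opens after `failure_threshold` consecutive 429s and stays open
    until a cool-off window elapses. Thresholds and backoff values
    here are illustrative, not recommendations.
    """

    def __init__(self, failure_threshold=3, base_cooloff=1.0,
                 max_cooloff=60.0, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.base_cooloff = base_cooloff
        self.max_cooloff = max_cooloff
        self.clock = clock          # injectable for testing
        self.consecutive_429s = 0
        self.open_until = 0.0

    def allow_request(self):
        # Closed (or half-open) once the cool-off window has elapsed.
        return self.clock() >= self.open_until

    def record_success(self):
        # Any successful call resets the failure streak.
        self.consecutive_429s = 0

    def record_429(self, retry_after=None):
        self.consecutive_429s += 1
        if self.consecutive_429s >= self.failure_threshold:
            if retry_after is not None:
                # Prefer the server's hint (e.g. a Retry-After header).
                cooloff = retry_after
            else:
                # Otherwise back off exponentially, capped at max_cooloff.
                exponent = self.consecutive_429s - self.failure_threshold
                cooloff = min(self.base_cooloff * (2 ** exponent),
                              self.max_cooloff)
            self.open_until = self.clock() + cooloff
```

A consumer would check `allow_request()` before pulling the next message from the service bus and simply leave messages on the bus while the breaker is open, letting Cosmos DB recover.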
I would definitely like more information on Autopilot Containers. Will I be able to convert an existing Container to Autopilot? If not, is there a migration path to gracefully move data from the older container to the new one?
We're really curious as to what the backup features will be.
We currently file a restore request manually, which is handled by Microsoft Support, and they restore the database to a separate (new) account. We then move this database to another resource group in a non-production Azure subscription and update our Key Vault references.
This means a lot of manual work just to be able to test the impact of a release on production data.
Ideally, we could use the Azure CLI in our DevOps pipeline to fire a restore command for specified accounts, databases, or containers.