Arbitrum 2nd Milestone Completion Announcement

This milestone focuses on finalizing the simulation engine: implementing security features, completing cloud deployment, testing the platform, engaging the community, and producing documentation. Key areas include:

  • Security: Implementing robust user authentication systems.
  • CI/CD & Deployment: Seamless cloud deployment and scaling.
  • Platform Testing: Running simulations to validate performance and scalability.
  • Documentation: Creating user-friendly guides and technical documents for easy onboarding.
  • Community Engagement: Hosting regular risk calls to discuss protocol security and market trends.

This milestone ensures the platform's readiness for real-world use, enhances community interaction, and strengthens protocol security within the Arbitrum ecosystem.

Objectives & Scope of Collaboration

Cloud Deployment and Architecture Overview

The platform automates deployment using the AWS SDK for TypeScript together with Terraform, ensuring scalability and efficient management. AWS ECS orchestrates thousands of parallel simulations in isolated containers, with Fargate as a serverless option for auto-scaling and efficient resource use. Graviton processors (ARM64) reduce power consumption and costs by up to 60%. Each container’s lifecycle matches the duration of its simulation, allowing seamless parallel execution without affecting other operations.
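
For illustration, here is a minimal sketch of how a single simulation container might be launched on ECS Fargate with the AWS SDK for JavaScript v3 in TypeScript. The cluster name, task definition, subnet, and container name are placeholders, not values from the actual deployment.

```typescript
// Hypothetical sketch: launching one isolated simulation task on ECS Fargate.
// Cluster, task definition, subnet, and container names are placeholders.
import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({ region: "us-east-1" });

async function launchSimulation(simulationId: string): Promise<void> {
  await ecs.send(
    new RunTaskCommand({
      cluster: "simulation-cluster",        // placeholder cluster
      taskDefinition: "simulation-task:1",  // placeholder ARM64 (Graviton) task definition
      launchType: "FARGATE",
      count: 1,
      networkConfiguration: {
        awsvpcConfiguration: {
          subnets: ["subnet-0123456789abcdef0"], // placeholder private subnet
          assignPublicIp: "DISABLED",
        },
      },
      overrides: {
        containerOverrides: [
          {
            name: "simulation", // placeholder container name
            // The task runs one simulation and exits, so the container's
            // lifetime matches the simulation's duration.
            environment: [{ name: "SIMULATION_ID", value: simulationId }],
          },
        ],
      },
    })
  );
}
```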

Data storage is handled by DocumentDB in a private VPC, offering secure, high-speed NoSQL capabilities. SQS queues and Lambda functions, or ECS Fargate workers, drive event-triggered operations, ensuring optimal platform performance and responsiveness.
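
The event-driven side can be illustrated with a rough sketch of a worker that long-polls an SQS queue and processes each job message; the queue URL and the job handling below are hypothetical placeholders.

```typescript
// Hypothetical sketch: an event-driven worker that long-polls SQS for simulation jobs.
// The queue URL and job handling below are placeholders, not production values.
import {
  SQSClient,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });
const QUEUE_URL =
  "https://sqs.us-east-1.amazonaws.com/123456789012/simulation-jobs"; // placeholder

async function pollOnce(): Promise<void> {
  const { Messages } = await sqs.send(
    new ReceiveMessageCommand({
      QueueUrl: QUEUE_URL,
      MaxNumberOfMessages: 10,
      WaitTimeSeconds: 20, // long polling avoids busy-waiting between jobs
    })
  );

  for (const message of Messages ?? []) {
    // A real worker would kick off a simulation here, e.g. by launching an ECS task.
    console.log("processing job", message.Body);

    // Delete only after successful processing so failed jobs are redelivered.
    await sqs.send(
      new DeleteMessageCommand({
        QueueUrl: QUEUE_URL,
        ReceiptHandle: message.ReceiptHandle!,
      })
    );
  }
}
```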

CI/CD Integrations

We run our own self-hosted worker nodes in a secure environment across multiple EC2 instances. These work alongside GitHub Actions to ensure reliable deployments and achieve almost 10x faster build times on native ARM-based machines.

Platform Testing

Testing with Protocols on the Arbitrum Ecosystem

Once the cloud deployment was completed, the platform underwent extensive testing by running thousands of simulations simultaneously. This was crucial for assessing the system's ability to scale efficiently and maintain performance under high-load conditions. The goal was to ensure that the platform could handle multiple concurrent tasks without experiencing slowdowns or failures, highlighting its robustness and scalability.

The ability to run numerous simulations in parallel is essential for the platform to simulate various scenarios and stress-test protocols. This feature demonstrates the platform's capability to scale up as needed, making it suitable for real-world applications involving complex data and risk scenarios.
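
As a simplified illustration of this fan-out pattern, the sketch below runs a large batch of simulations with a bounded number in flight at once; `runSimulation`, the batch size, and the concurrency limit are hypothetical.

```typescript
// Hypothetical sketch: fanning out many simulation runs with bounded concurrency.
// runSimulation() stands in for whatever actually starts a run (e.g. an ECS task).
async function runSimulation(id: number): Promise<void> {
  // placeholder for the real simulation launch
}

async function runBatch(total: number, concurrency: number): Promise<void> {
  let next = 0;

  // Each worker repeatedly claims the next pending run, so at most
  // `concurrency` simulations are in flight at any one time.
  const workers = Array.from({ length: concurrency }, async () => {
    while (next < total) {
      const id = next++;
      await runSimulation(id);
    }
  });

  await Promise.all(workers);
}

// Example: 5,000 runs with at most 200 in flight at once.
// await runBatch(5000, 200);
```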

Generating Risk Scenarios

A dedicated team of researchers and data scientists was tasked with analyzing protocols and creating realistic risk scenarios. This included simulating adverse events such as "Black Thursday," in which the market drops sharply, and "Stable Depegs," in which stablecoins lose their peg. These scenarios allowed for comprehensive testing of protocols under various stress conditions.

To execute these tests, the team developed multiple scripts that defined the behavior of key elements (a simplified sketch of how they fit together follows the list):

  • Agent Scripts: Code representing different participants (agents) interacting with the system.
  • Scenario Scripts: Scripts that defined the conditions under which the tests would run, including specific risk events or market situations.
  • Observer Scripts: Used to monitor the simulation, track outcomes, and collect data for analysis.
  • Assertion Scripts: Scripts that evaluate whether the system behaved as expected under the defined conditions.
  • Smart Contract Integration: The platform's testing environment also included smart contracts from the protocols being tested. This integration helped in assessing how these contracts would perform under various scenarios, ensuring their mechanism designs could withstand different risk conditions.
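
For illustration only, the sketch below shows one way these script types could fit together; the interfaces, the "Stable Depeg" scenario, and the pass/fail logic are hypothetical and not taken from the platform's actual code.

```typescript
// Hypothetical sketch of how agent, scenario, observer, and assertion scripts compose.
// All interfaces and the example "Stable Depeg" scenario are illustrative placeholders.
interface MarketState {
  prices: Record<string, number>;
  step: number;
}

// Agent script: a participant that reacts to the market state on each step.
interface Agent {
  act(state: MarketState): void;
}

// Scenario script: the conditions under which the test runs, including the risk event.
interface Scenario {
  name: string;
  applyShock(state: MarketState): MarketState;
}

// Observer script: records outcomes during the run for later analysis.
interface Observer {
  record(state: MarketState): void;
}

// Assertion script: checks whether the system behaved as expected.
interface Assertion {
  check(history: MarketState[]): boolean;
}

// Example scenario: a stablecoin losing its peg ("Stable Depeg").
const stableDepeg: Scenario = {
  name: "Stable Depeg",
  applyShock: (state) => ({
    ...state,
    prices: { ...state.prices, USDX: state.prices["USDX"] * 0.85 }, // hypothetical 15% depeg
  }),
};

function runScenario(
  scenario: Scenario,
  agents: Agent[],
  observer: Observer,
  assertions: Assertion[],
  steps: number
): boolean {
  // Inject the risk event at the start of the run.
  let state: MarketState = scenario.applyShock({ prices: { USDX: 1.0 }, step: 0 });
  const history: MarketState[] = [];

  for (let i = 0; i < steps; i++) {
    agents.forEach((agent) => agent.act(state));
    observer.record(state);
    history.push(state);
    state = { ...state, step: state.step + 1 };
  }

  // The run passes only if every assertion holds over the recorded history.
  return assertions.every((assertion) => assertion.check(history));
}
```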

Feedback Loop & Bug Fixes

Following initial testing, feedback from developers highlighted areas for improvement, including bug fixes, interface enhancements, and feature requests. The platform was iteratively refined to resolve bugs, enhance the user experience, and incorporate new features based on protocol suggestions. This continuous feedback loop made the platform more user-friendly and robust, tailored to the demands of the Arbitrum ecosystem and ready for real-world challenges.

Conclusion

As part of its commitment to enhancing protocol security and engaging with the broader community, Chainrisk will continue to organize monthly community risk calls specifically for the Arbitrum ecosystem. These calls will serve as a platform for sharing insights, discussing new developments, and addressing any security-related concerns.

Download the Report PDF here
