Let’s first define what “secure” means. A “secure” chat in a messaging app generally means the message is encrypted on the sender’s side and can only be decrypted on the receiver’s side. This is also called E2EE (end-to-end encryption).
In this sense, is Telegram secure? It depends.
Telegram’s usual private and group chats aren’t end-to-end encrypted. This means third parties can potentially intercept and read your messages. Telegram uses the following approach for security:
The encrypted message is stored in Telegram servers, but split into several pieces and stored in different countries.
The decryption keys are also split and saved in different countries.
This means a hacker would need to obtain both the message chunks and the keys from all of those places. It is possible but extremely difficult.
Secret chats are end-to-end encrypted. If you choose the “secret chat” option, the conversation is end-to-end encrypted, but this mode has several limitations:
It doesn’t support group chat or normal one-to-one chat.
It is only enabled for mobile devices. It doesn’t support laptops.
B-Tree B-Tree is the most widely used indexing data structure in almost all relational databases.
The basic unit of information storage in a B-Tree is usually called a “page”. Looking up a key traces down from the root page, narrowing the range of keys at each level, until the leaf page holding the actual value is found.
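To make the lookup path concrete, here is a minimal Python sketch of tracing a key down a B-Tree; the node layout (keys, children, values) is a simplified assumption, not how any particular database lays out its pages:

```python
from bisect import bisect_right

class BTreeNode:
    """A simplified B-Tree node; in a real database this would be a fixed-size page."""
    def __init__(self, keys, children=None, values=None):
        self.keys = keys            # sorted keys on this page
        self.children = children    # child pages (internal nodes only)
        self.values = values        # values (leaf nodes only)

def btree_lookup(node, key):
    """Trace down from the root, narrowing the key range page by page."""
    while node.children is not None:            # internal page
        idx = bisect_right(node.keys, key)      # which child covers this key's range?
        node = node.children[idx]
    if key in node.keys:                        # leaf page: exact match?
        return node.values[node.keys.index(key)]
    return None

# A tiny two-level tree: root keys [10, 20] split the key space across three leaves
leaf1 = BTreeNode(keys=[1, 5], values=["a", "b"])
leaf2 = BTreeNode(keys=[10, 15], values=["c", "d"])
leaf3 = BTreeNode(keys=[20, 25], values=["e", "f"])
root = BTreeNode(keys=[10, 20], children=[leaf1, leaf2, leaf3])

print(btree_lookup(root, 15))   # -> "d"
print(btree_lookup(root, 7))    # -> None
```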
LSM-Tree LSM-Tree (Log-Structured Merge Tree) is widely used by many NoSQL databases, such as Cassandra, LevelDB, and RocksDB.
LSM-trees buffer writes in memory and persist key-value pairs to disk as Sorted Strings Tables (SSTables), in which the keys are sorted.
Level 0 segments are periodically merged into Level 1 segments. This process is called compaction.
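As a rough illustration of these ideas (not the actual implementation in Cassandra, LevelDB, or RocksDB), the toy Python sketch below buffers writes in a memtable, flushes it to sorted SSTable-like segments, and compacts the segments by keeping only the newest value for each key:

```python
class TinyLSM:
    """A toy LSM-Tree: an in-memory memtable plus sorted segments (SSTables)."""

    def __init__(self, memtable_limit=2):
        self.memtable = {}
        self.segments = []                  # newest segment last; each is a sorted list of (key, value)
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self.flush()

    def flush(self):
        # Flush the memtable to a new segment with keys sorted (an on-disk SSTable in real systems)
        self.segments.append(sorted(self.memtable.items()))
        self.memtable = {}

    def get(self, key):
        # Check the memtable first, then the segments from newest to oldest
        if key in self.memtable:
            return self.memtable[key]
        for segment in reversed(self.segments):
            for k, v in segment:
                if k == key:
                    return v
        return None

    def compact(self):
        # Merge all segments into one, keeping only the newest value per key
        merged = {}
        for segment in self.segments:       # older segments first, newer values overwrite
            merged.update(dict(segment))
        self.segments = [sorted(merged.items())]

db = TinyLSM()
db.put("a", 1); db.put("b", 2)   # flushes segment 1
db.put("a", 3); db.put("c", 4)   # flushes segment 2
db.compact()
print(db.get("a"))   # -> 3 (the newest value wins after compaction)
```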
The biggest difference is probably this:
B-Tree enables faster reads
LSM-Tree enables faster writes
The diagram shows the many use cases of PostgreSQL - one database that covers almost all the use cases developers need.
OLTP (Online Transaction Processing) We can use PostgreSQL for CRUD (Create-Read-Update-Delete) operations.
OLAP (Online Analytical Processing) We can use PostgreSQL for analytical processing. PostgreSQL is based on HTAP (Hybrid transactional/analytical processing) architecture, so it can handle both OLTP and OLAP well.
FDW (Foreign Data Wrapper) An FDW is an extension available in PostgreSQL that allows us to access a table or schema in one database from another.
Streaming PipelineDB is a PostgreSQL extension for high-performance time-series aggregation, designed to power real-time reporting and analytics applications.
Geospatial PostGIS is a spatial database extender for PostgreSQL object-relational database. It adds support for geographic objects, allowing location queries to be run in SQL.
Time Series Timescale extends PostgreSQL for time series and analytics. For example, developers can combine relentless streams of financial and tick data with other business data to build new apps and uncover unique insights.
Distributed Tables CitusData scales Postgres by distributing data & queries.
Every few years, there is a special phenomenon where the second after “23:59:59” is not “00:00:00” but “23:59:60”. It is called a leap second, and it can easily cause time-processing bugs if not handled carefully.
Do we always need to handle leap seconds? It depends on which time representation is used. Commonly used time representations include UTC, GMT, TAI, Unix Timestamp, Epoch time, TrueTime, and GPS time.
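For example, the widely used Unix timestamp simply has no slot for a leap second. The small standard-library Python illustration below (using the real leap second at the end of 2016) shows that “23:59:60” collapses to the same timestamp as the following midnight, which is one reason naive assumptions like “timestamps never repeat” can break:

```python
import calendar
import time

# struct_time allows tm_sec up to 61 precisely to accommodate leap seconds
leap = time.strptime("2016-12-31 23:59:60", "%Y-%m-%d %H:%M:%S")
after = time.strptime("2017-01-01 00:00:00", "%Y-%m-%d %H:%M:%S")

# Unix time has no representation for the leap second itself:
# both instants collapse to the same timestamp.
print(calendar.timegm(leap))   # 1483228800
print(calendar.timegm(after))  # 1483228800
```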
Since everyone is talking about Twitter, let’s take a quick look at what Twitter’s architecture looked like in 2012. This article is based on a tech talk given by a Twitter engineer. I redrew the diagram as the original diagram is difficult to read.
Very nice illustration of the Data Pipeline by Semantix. It may provide some insights into understanding data pipelines.
The data platform ingests, processes, analyzes and presents data generated by different data sources. A data platform manages all aspects of the data puzzle.
Modern data platforms offer a number of benefits, including centralized access to data across an organization, which eliminates silos and provides actionable insights.
Is it possible to achieve at least a 10x performance boost compared to the original Kafka and Cassandra? How to achieve that? What are the trade-offs?
There is an exciting class of storage software, such as Redpanda and ScyllaDB, that boasts at least an order of magnitude improvement in performance.
Redpanda and ScyllaDB are used as examples in the diagram below. Redpanda is comparable to Kafka, while ScyllaDB is comparable to Cassandra in the NoSQL space.
No JVM, No GC
Kafka and Cassandra are written in JVM-based languages and usually suffer from high tail latency: the average latency is good, but the p99 latency is not, due to GC (Garbage Collection) pauses.
Redpanda and ScyllaDB are written from scratch in C++ and leverage new frameworks (for example, SeaStar). They are harder to code but can achieve much higher performance (see the diagram below for detailed performance metrics).
Shared-nothing Architecture
Every request is pinned to a CPU core. There is no memory contention between cores. This is also friendly to NUMA (Non-Uniform Memory Access) architectures, so a thread can access the memory closest to its CPU core.
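As a loose illustration of the core-pinning idea (not how Redpanda or ScyllaDB are actually implemented), the Linux-only sketch below pins one worker process to each CPU core using Python’s os.sched_setaffinity:

```python
import os
from multiprocessing import Process

def worker(core_id: int) -> None:
    # Pin this worker to a single CPU core (Linux-only API).
    # In a shared-nothing design, each core owns its own partition of the data,
    # so there is no cross-core locking or memory contention.
    os.sched_setaffinity(0, {core_id})
    print(f"worker pinned to core {core_id}, affinity = {os.sched_getaffinity(0)}")
    # ... handle only the requests routed to this core ...

if __name__ == "__main__":
    procs = [Process(target=worker, args=(core,)) for core in range(os.cpu_count())]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```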
Zero-copy Networking
Using the SeaStar framework, both products can access network devices directly in user mode, and the kernel is not involved. Zero-copy, zero-lock, and zero-context-switch.
It’s been a decade since Apache Kafka and Apache Cassandra revolutionized how the software industry handled huge amounts of data.
Since then, the server CPU core count has grown 10x. Memory has grown from 64GB to half a TB. NVMe SSD drives are about 100 times faster than spinning disks from a decade ago. Network bandwidth at 25Gbps is commonplace.
A new class of software has come into the market to capitalize on this trend. We wrote this post to raise awareness about this trend.
Designing a system that supports millions of users is challenging, and it is a journey that requires continuous refinement and endless improvement. Let’s take a quick look at some of the key components powering such a system.
Load balancer A load balancer evenly distributes incoming traffic among web servers that are defined in a load-balanced set.
Web servers Web servers return HTML pages or JSON responses for rendering.
Databases: vertical scaling and horizontal scaling
Cache A cache is a temporary storage area that stores the result of expensive responses or frequently accessed data in memory so that subsequent requests are served more quickly.
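A common way to use such a cache is the cache-aside pattern. Here is a minimal sketch, with an in-memory dict standing in for a real cache like Redis and a hypothetical query_database function as a placeholder:

```python
import time

cache = {}                  # stand-in for a real cache such as Redis or Memcached
CACHE_TTL_SECONDS = 300

def query_database(user_id):
    # Hypothetical placeholder for the expensive database query
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Cache-aside: read from the cache first, fall back to the database on a miss."""
    entry = cache.get(user_id)
    if entry and entry["expires_at"] > time.time():
        return entry["value"]                       # cache hit
    value = query_database(user_id)                 # cache miss: query the database
    cache[user_id] = {"value": value, "expires_at": time.time() + CACHE_TTL_SECONDS}
    return value

print(get_user(42))   # miss: loads from the database and populates the cache
print(get_user(42))   # hit: served from memory
```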
CDN A CDN is a network of geographically dispersed servers used to deliver static content. CDN servers cache static content like images, videos, CSS, JavaScript files, etc.
Message queue A message queue is a durable component, stored in memory, that supports asynchronous communication.
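As a minimal sketch of asynchronous communication, the example below uses Python’s standard queue module as a stand-in for a durable broker (a production system would use something like Kafka or RabbitMQ instead):

```python
import queue
import threading

message_queue = queue.Queue()    # stand-in for a durable message broker

def producer():
    # Publish work and return immediately; no waiting for the worker to finish.
    for i in range(3):
        message_queue.put(f"resize-photo-{i}")

def consumer():
    # The worker consumes jobs asynchronously, at its own pace.
    while True:
        job = message_queue.get()
        print(f"processing {job}")
        message_queue.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
message_queue.join()   # wait until every published job has been processed
```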
Logging, metrics, automation When working with a small website that runs on a few servers, logging, metrics, and automation support are good practices but not a necessity. However, now that your site has grown to serve a large business, investing in those tools is essential.
RPC (Remote Procedure Call) is called “remote” because it enables communication between remote services when they are deployed to different servers under a microservice architecture. From the user’s point of view, it acts like a local function call.
The diagram below illustrates the overall data flow for gRPC.
Step 1: A REST call is made from the client. The request body is usually in JSON format.
Steps 2 - 4: The order service (gRPC client) receives the REST call, transforms it, and makes an RPC call to the payment service. gRPC encodes the request from the client stub into a binary format and sends it to the low-level transport layer.
Step 5: gRPC sends the packets over the network via HTTP/2. Because of binary encoding and network optimizations, gRPC is said to be 5X faster than JSON.
Steps 6 - 8: The payment service (gRPC server) receives the packets from the network, decodes them, and invokes the server application.
Steps 9 - 11: The result is returned from the server application, and gets encoded and sent to the transport layer.
Steps 12 - 14: The order service receives the packets, decodes them, and sends the result to the client application.
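Much of that speedup comes from compact binary encoding. The toy comparison below is not Protocol Buffers itself, just a hint at why a fixed binary layout (here via Python’s struct module) is far smaller than the equivalent JSON text; the payment fields are made up for illustration:

```python
import json
import struct

# A hypothetical payment request: (order_id, amount_in_cents, currency_code)
order_id, amount_cents, currency = 123456789, 4999, "USD"

json_payload = json.dumps(
    {"order_id": order_id, "amount_cents": amount_cents, "currency": currency}
).encode("utf-8")

# Fixed binary layout: 8-byte int, 4-byte int, 3-byte string (big-endian)
binary_payload = struct.pack(">qi3s", order_id, amount_cents, currency.encode())

print(len(json_payload))    # ~64 bytes of text
print(len(binary_payload))  # 15 bytes of binary
```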
DDD was introduced in Eric Evans’ classic book “Domain-Driven Design: Tackling Complexity in the Heart of Software”. It explained a methodology to model a complex business. There is a lot of content in this book, so I'll summarize the basics.
The composition of domain objects:
Entity: a domain object that has an ID and a life cycle.
Value Object: a domain object without an ID. It is used to describe the properties of an Entity.
Aggregate: a collection of Entities that are bound together by an Aggregate Root (which is also an Entity). It is the unit of storage.
The life cycle of domain objects:
Repository: storing and loading the Aggregate.
Factory: handling the creation of the Aggregate.
Behavior of domain objects:
Domain Service: orchestrates multiple Aggregates.
Domain Event: a description of what has happened to the Aggregate. The event is published so that others can consume it and reconstruct what happened.
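To make these terms concrete, here is a minimal sketch in Python built around a hypothetical Order aggregate; the names (Order, OrderLine, OrderFactory, OrderRepository) are illustrative, not taken from the book:

```python
from dataclasses import dataclass, field
from typing import List
from uuid import uuid4

@dataclass(frozen=True)
class OrderLine:
    """Value Object: no identity of its own; describes a property of the aggregate."""
    product: str
    quantity: int
    price_cents: int

@dataclass
class Order:
    """Entity and Aggregate Root: has an ID and a life cycle; OrderLines live inside it."""
    order_id: str
    lines: List[OrderLine] = field(default_factory=list)

    def add_line(self, line: OrderLine) -> dict:
        self.lines.append(line)
        # Domain Event: a plain description of what has happened to the aggregate
        return {"event": "OrderLineAdded", "order_id": self.order_id, "product": line.product}

class OrderFactory:
    """Factory: handles the creation of the aggregate."""
    @staticmethod
    def new_order() -> Order:
        return Order(order_id=str(uuid4()))

class OrderRepository:
    """Repository: stores and loads the aggregate as a single unit."""
    def __init__(self):
        self._store = {}

    def save(self, order: Order) -> None:
        self._store[order.order_id] = order

    def load(self, order_id: str) -> Order:
        return self._store[order_id]

repo = OrderRepository()
order = OrderFactory.new_order()
event = order.add_line(OrderLine(product="book", quantity=1, price_cents=1999))
repo.save(order)
print(event["event"], repo.load(order.order_id).lines[0].product)
```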
Congratulations on getting this far. Now you know the basics of DDD. If you want to learn more, I highly recommend the book. It might help to simplify the complexity of software modeling.
How do we handle schema changes when performing data migration? The diagram below shows how Apache Avro manages schema evolution during data migration. Avro was started in 2009, initially as a subproject of Apache Hadoop, to address Thrift’s limitations in Hadoop use cases. Avro is mainly used for two things: data serialization and RPC.
Key points in the diagram:
We can export the data to object container files, where schema sits together with the data blocks. Avro dynamically generates the schemas based on the columns, so if the schema is changed, a new schema is generated and stored with new data.
When the exported files are loaded into another data store (for example, Teradata), anyone can read the schema and know how to read the data. The old data and new data can be successfully migrated to the new database. Unlike gRPC or Thrift, which statically generate schemas, Avro makes the data migration process easier.
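Here is a minimal sketch of the “schema travels with the data” idea, using the third-party fastavro package (assumed to be installed); the User record and its fields are made up for illustration:

```python
import io
from fastavro import writer, reader, parse_schema

# Writer side: the schema is embedded in the object container file with the data blocks
schema = parse_schema({
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "email", "type": "string"},
    ],
})
records = [{"id": 1, "email": "a@example.com"}, {"id": 2, "email": "b@example.com"}]

buf = io.BytesIO()                 # stands in for an exported Avro container file
writer(buf, schema, records)       # schema + data written together

# Reader side (e.g. the target data store): no out-of-band schema is needed
buf.seek(0)
avro_reader = reader(buf)
print(avro_reader.writer_schema)   # the schema read straight from the file
for record in avro_reader:
    print(record)
```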
Over to you: There are so many file formats for big data. Avro vs Parquet vs JSON vs XML vs Protobuf vs ORC. Do you know the differences?
The diagram below shows how data is encapsulated and de-encapsulated when transmitting over the network.
Step 1: When Device A sends data to Device B over the network via the HTTP protocol, an HTTP header is first added at the application layer.
Step 2: Then a TCP or UDP header is added, and the data is encapsulated into TCP segments at the transport layer. The header contains the source port, destination port, and sequence number.
Step 3: The segments are then encapsulated with an IP header at the network layer. The IP header contains the source/destination IP addresses.
Step 4: A MAC header is added to the IP datagram at the data link layer, with source/destination MAC addresses.
Step 5: The encapsulated frames are sent to the physical layer and transmitted over the network as binary bits.
Steps 6-10: When Device B receives the bits from the network, it performs the de-encapsulation process, which is the reverse of the encapsulation process. The headers are removed layer by layer, and eventually, Device B can read the data.
We need layers in the network model because each layer focuses on its own responsibilities. Each layer can rely on its own header for processing instructions and does not need to understand the meaning of the payload handed down from the layer above.
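As a toy sketch of this wrap-and-unwrap idea (the “headers” below are simplified stand-ins, not real HTTP/TCP/IP/Ethernet formats), each layer prepends its own header on the way down and strips it on the way up without ever touching the payload:

```python
LAYER_HEADERS = [b"HTTP|", b"TCP|", b"IP|", b"MAC|"]   # application -> transport -> network -> data link

def encapsulate(payload: bytes) -> bytes:
    # Each layer prepends its own (fake, simplified) header to the data from the layer above
    for header in LAYER_HEADERS:
        payload = header + payload
    return payload

def de_encapsulate(frame: bytes) -> bytes:
    # The receiver strips the headers layer by layer, in reverse order
    for header in reversed(LAYER_HEADERS):
        assert frame.startswith(header)
        frame = frame[len(header):]
    return frame

frame = encapsulate(b"GET /index.html")
print(frame)                  # b'MAC|IP|TCP|HTTP|GET /index.html'
print(de_encapsulate(frame))  # b'GET /index.html'
```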