# Serialize `time::OffsetDateTime` using `serde_as` from the `serde_with` crate
- Table design
- Database management using sqlx
- Writing the Data Access Layer
- Write the actix handler
- Request data through the API
- Choose the correct serialization method
- Request data through the API after using serde_as
## Table design
When developing a backend API with PostgreSQL and sqlx, you will almost certainly use dates in your database design. Take the `users` table as an example:
```sql
-- Add migration script here
CREATE TYPE gender AS ENUM ('male', 'female', 'other');

-- Table `users`
CREATE TABLE IF NOT EXISTS users (
    id BIGSERIAL PRIMARY KEY,
    username TEXT UNIQUE NOT NULL,
    gender gender NOT NULL,
    disabled BOOLEAN NOT NULL DEFAULT FALSE,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```
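If you want to follow along with the API responses shown later in the post, you could seed the table with matching rows. This is hypothetical sample data (the post never shows its seed script); the values mirror the JSON output that appears further down:

```sql
-- Hypothetical seed data matching the API output shown later in the post
INSERT INTO users (username, gender, disabled)
VALUES
    ('john_doe', 'male', FALSE),
    ('jane_smith', 'female', TRUE),
    ('alex_jones', 'other', FALSE);
```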
In the example above, the `users` table includes a `created_at` column of type `TIMESTAMPTZ`. In PostgreSQL, the `TIMESTAMPTZ` data type stands for "timestamp with time zone": values are stored in UTC and converted to the session's time zone on output.
## Database management using sqlx
We can also use sqlx to generate the migration and add the SQL above to the migration script. The following example uses the `sqlx` command (from `sqlx-cli`, SQLx's associated command-line utility for managing databases and migrations) to create the database and generate the migration.
```sh
# create database
DATABASE_URL=postgres://localhost/test sqlx database create

# create migration
DATABASE_URL=postgres://localhost/test sqlx migrate add user
```
We can list all migration scripts in the `migrations` directory, which is generated by the `sqlx migrate add user` command.
```text
drwxr-xr-x    - username 22 Aug 14:22 migrations
.rw-r--r--  334 username 22 Aug 14:22 └── 20230822062052_user.sql
```
We add the SQL above to the file `20230822062052_user.sql`, and running `sqlx migrate run` will apply the migration.
## Writing the Data Access Layer
We can write an `all` function on the `User` model to fetch data from the database using the `sqlx` crate.
```rust
use serde::Serialize;
use sqlx::PgPool;
use time::OffsetDateTime;

#[derive(Debug, Serialize)]
pub struct User {
    pub id: i64,
    pub username: String,
    pub disabled: bool,
    pub gender: Gender,
    pub created_at: OffsetDateTime,
}

#[derive(Clone, PartialEq, PartialOrd, Serialize, sqlx::Type, Debug)]
#[sqlx(type_name = "gender")]
#[sqlx(rename_all = "lowercase")]
pub enum Gender {
    Male,
    Female,
    Other,
}

impl User {
    pub async fn all(connection: &PgPool) -> Result<Vec<User>, sqlx::Error> {
        let users = sqlx::query_as!(
            User,
            r#"
            SELECT id, username, gender as "gender: _", disabled, created_at
            FROM users
            "#
        )
        .fetch_all(connection)
        .await?;
        Ok(users)
    }
}
```
The code snippet above uses the `sqlx` crate to interact with a PostgreSQL database and retrieve user data.
- The `User` struct represents a user entity and is serialized using the serde crate. It contains the fields `id`, `username`, `disabled`, `gender`, and `created_at`, representing the corresponding columns in the database table.
- The `Gender` enum represents the possible genders a user can have. It derives Clone, PartialEq, PartialOrd, and Serialize. The `sqlx::Type` trait is implemented to specify that this `enum` should be treated as a PostgreSQL custom type named "gender". The `sqlx(rename_all)` attribute specifies that the enum variants should be serialized in lowercase. You can refer to the `rename_all` documentation for more details. If you don't specify `sqlx(rename_all)`, an error will occur:
```text
thread 'actix-rt|system:0|arbiter:0' panicked at 'called `Result::unwrap()` on an `Err` value: ColumnDecode { index: "2", source: "invalid value \"male\" for enum Gender" }', enum-example/src/bin/enum.rs:33:45
```
- The `User` struct also contains an `all` function that retrieves all users from the database. It takes a reference to a `PgPool` connection pool as a parameter and returns a `Result` with a vector of `User` instances or an `sqlx::Error` if an error occurs.
- Inside the `all` function, a SQL query is defined using the `sqlx::query_as!` macro. It selects the necessary columns from the `users` table, mapping the `gender` column to the `Gender` enum using the `as "gender: _"` syntax.
- Finally, the `fetch_all` method is called on the query to execute it and retrieve all rows as a vector of `User` instances. The result is returned as a `Result`.
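The effect of `#[sqlx(rename_all = "lowercase")]` on decoding can be sketched in plain Rust. This is a conceptual stand-in, not the code sqlx actually generates:

```rust
#[derive(Debug, PartialEq)]
enum Gender {
    Male,
    Female,
    Other,
}

// Conceptual sketch: with `rename_all = "lowercase"`, the decoder matches the
// database's lowercase labels against the lowercased variant names. Without it,
// the decoder would look for "Male"/"Female"/"Other" and reject the stored
// value "male", producing the `invalid value "male" for enum Gender` panic.
fn decode_gender(db_value: &str) -> Result<Gender, String> {
    match db_value {
        "male" => Ok(Gender::Male),
        "female" => Ok(Gender::Female),
        "other" => Ok(Gender::Other),
        unknown => Err(format!("invalid value {:?} for enum Gender", unknown)),
    }
}

fn main() {
    assert_eq!(decode_gender("male"), Ok(Gender::Male));
    // The capitalized variant name is *not* a valid database label:
    assert!(decode_gender("Male").is_err());
}
```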
## Write the actix handler
### AppState
Once we have the code implemented, let's see how we can use it to retrieve user data from a PostgreSQL database.
First, we define an `AppState` struct to represent the server's state. It contains two fields: `app_name`, a string holding the application name, and `pool`, a `PgPool` instance representing the connection pool to the PostgreSQL database.
You can also add more to `AppState`, e.g. a Redis client to exchange data with Redis, or a Kafka client to send and receive messages from Kafka.
```rust
// This struct represents the server's state
struct AppState {
    app_name: String,
    pool: PgPool,
}
```
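Why can one `AppState` be read by many concurrent requests? actix-web's `web::Data<T>` is an `Arc`-based shared pointer handed to every worker. A minimal stdlib-only sketch of that sharing pattern (hypothetical `run_workers` helper, and the `pool` field omitted so the sketch compiles without sqlx):

```rust
use std::sync::Arc;
use std::thread;

// Stand-in for the post's AppState (pool omitted for a stdlib-only sketch).
struct AppState {
    app_name: String,
}

// Spawn `n` "workers" that each read the shared state, similar to how
// actix-web hands `web::Data` (an Arc) to each worker thread.
fn run_workers(state: Arc<AppState>, n: usize) -> Vec<String> {
    let handles: Vec<_> = (0..n)
        .map(|_| {
            // Cloning the Arc copies a pointer, not the state itself.
            let state = Arc::clone(&state);
            thread::spawn(move || state.app_name.clone())
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let state = Arc::new(AppState { app_name: "enum".into() });
    assert_eq!(run_workers(state, 3), vec!["enum"; 3]);
}
```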
### Actix handler
Then, we define a handler for retrieving all users.
```rust
async fn all_users(data: web::Data<AppState>) -> Result<impl Responder> {
    let connection = &data.pool;
    // NOTE: unwrap() will panic on a database error; map the error to an
    // HTTP response in production code
    let users = User::all(connection).await.unwrap();
    Ok(web::Json(users))
}
```
The `all_users` function is an asynchronous handler that retrieves all users from the database. It takes a `web::Data` parameter containing the shared `AppState` data. Inside the function, it accesses the `PgPool` instance from the shared data and uses the `User` model to fetch all users from the database asynchronously.
### HttpServer setup
Next, we create a `PgPool` instance and store the pool in the application state by passing a `Data::new(AppState { ... })` instance to the `app_data` method.
```rust
use std::env;

use actix_web::{web, web::Data, App, HttpServer};
use sqlx::postgres::{PgPool, PgPoolOptions};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    env::set_var("RUST_LOG", "info");
    env_logger::init();

    let db_url = "postgres://localhost/test";
    let pool = connect(db_url).await.unwrap();

    HttpServer::new(move || {
        App::new()
            .app_data(Data::new(AppState {
                app_name: "enum".into(),
                pool: pool.clone(),
            }))
            .service(web::scope("/api/v1").configure(config))
            .route("/health", web::get().to(health))
    })
    .bind(("0.0.0.0", 8080))?
    .run()
    .await
}

/// Open a connection pool to the database
pub async fn connect(db_url: &str) -> sqlx::Result<PgPool> {
    // NOTE: older versions of sqlx used `PgPool::new(db_url)`; newer versions
    // create a pool through `PgPoolOptions`
    let pool = PgPoolOptions::new()
        .max_connections(5)
        // the URL could also come from the environment:
        // .connect(&env::var("DATABASE_URL")?)
        .connect(db_url)
        .await?;
    Ok(pool)
}
```
### Config routes
Finally, we will configure routes for the application.
We can use the `configure` method to configure routes by passing a function with the `F: FnOnce(&mut ServiceConfig)` trait bound, like this:
```rust
HttpServer::new(move || {
    App::new()
        .app_data(Data::new(AppState {
            app_name: "enum".into(),
            pool: pool.clone(),
        }))
        // configure routes
        .service(web::scope("/api/v1").configure(config))
        .route("/health", web::get().to(health))
})
.bind(("0.0.0.0", 8080))?
.run()
.await
```
Here is the signature of the `configure` method:
```rust
pub fn configure<F>(mut self, cfg_fn: F) -> Self
where
    F: FnOnce(&mut ServiceConfig);
```
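The `FnOnce(&mut ServiceConfig)` bound accepts any function or closure that is called once with mutable access to the config. The callback pattern can be sketched in plain Rust; the `ServiceConfig` and `configure` below are simplified stand-ins, not actix-web's actual types:

```rust
// Simplified stand-in for actix's ServiceConfig: it just records route paths.
struct ServiceConfig {
    routes: Vec<String>,
}

impl ServiceConfig {
    fn route(&mut self, path: &str) -> &mut Self {
        self.routes.push(path.to_string());
        self
    }
}

// The framework builds the config, then hands it to the user's callback once.
fn configure<F>(cfg_fn: F) -> ServiceConfig
where
    F: FnOnce(&mut ServiceConfig),
{
    let mut cfg = ServiceConfig { routes: Vec::new() };
    cfg_fn(&mut cfg);
    cfg
}

fn main() {
    // Like the post's `config` function, the callback can live in another module.
    let cfg = configure(|c| {
        c.route("/users");
    });
    assert_eq!(cfg.routes, vec!["/users".to_string()]);
}
```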
And our `config` function:
```rust
use actix_web::{
    web::{self, Data, ServiceConfig},
    web::{get, post, resource as r, scope},
    App, Error, HttpRequest, HttpResponse, HttpServer, Responder, Result,
};

// this function could be located in a different module
pub fn config(cfg: &mut ServiceConfig) {
    cfg
        // users
        .service(scope("/users").service(
            r("").route(get().to(all_users)),
            // .route(post().to(delete_user)),
        ));
}
```
### Run the application
The application is configured with routes using the `service` and `route` methods. It includes a scope for API versioning with `/api/v1` and sets up a route for a health check endpoint (`/health`) and a route to retrieve all users (`/users`).
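Conceptually, the scope prefix and the resource path concatenate to form the full route. This one-liner is a simplification of actix's routing (which also involves guards and pattern matching), shown only to make the composed paths explicit:

```rust
// Simplified sketch: a scoped resource's full path is the scope prefix
// joined with the resource path.
fn full_path(scope_prefix: &str, resource_path: &str) -> String {
    format!("{scope_prefix}{resource_path}")
}

fn main() {
    // The users route wired up in this post:
    assert_eq!(full_path("/api/v1", "/users"), "/api/v1/users");
}
```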
With everything tied together, we can run the application using `cargo run`, or `cargo run --bin <binary>` if you have multiple binaries in your project:
```text
    Finished dev [unoptimized + debuginfo] target(s) in 2.73s
     Running `target/debug/enum`
[2023-08-23T02:00:37Z INFO  actix_server::builder] starting 10 workers
[2023-08-23T02:00:37Z INFO  actix_server::server] Actix runtime found; starting in Actix runtime
```
## Request data through the API
Now it's time to test the API. We can request the user data through the `/api/v1/users` endpoint:
```sh
curl '0:8080/api/v1/users' | jq
```
Output:
```json
[
  {
    "id": 1,
    "username": "john_doe",
    "disabled": false,
    "gender": "Male",
    "created_at": [2023, 234, 15, 3, 34, 422482000, 0, 0, 0]
  },
  {
    "id": 2,
    "username": "jane_smith",
    "disabled": true,
    "gender": "Female",
    "created_at": [2023, 234, 15, 3, 34, 422482000, 0, 0, 0]
  },
  {
    "id": 3,
    "username": "alex_jones",
    "disabled": false,
    "gender": "Other",
    "created_at": [2023, 234, 15, 3, 34, 422482000, 0, 0, 0]
  }
]
```
We have a problem: `created_at` is returned as an array of integers, when it should be a string like `2023-08-22T15:03:34.422482Z`. (The array is `time`'s default `Serialize` representation of an `OffsetDateTime`, which emits the date-time's components rather than a formatted string.) How do we solve this?
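To see that the array really does encode the same instant, we can decode it by hand. Reading the components as `[year, ordinal day of year, hour, minute, second, nanosecond, offset hour, offset minute, offset second]` (this interpretation is inferred from the output above, not quoted from `time`'s docs), day 234 of 2023 is August 22:

```rust
// Convert a day-of-year ordinal into (month, day).
fn ordinal_to_month_day(year: i32, ordinal: u32) -> (u32, u32) {
    let leap = (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    let lengths: [u32; 12] = [
        31, if leap { 29 } else { 28 }, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31,
    ];
    let mut remaining = ordinal;
    for (i, &len) in lengths.iter().enumerate() {
        if remaining <= len {
            return ((i + 1) as u32, remaining);
        }
        remaining -= len;
    }
    unreachable!("ordinal out of range")
}

fn main() {
    // The array the API returned for created_at:
    let raw: [u32; 9] = [2023, 234, 15, 3, 34, 422_482_000, 0, 0, 0];
    let (month, day) = ordinal_to_month_day(raw[0] as i32, raw[1]);
    let formatted = format!(
        "{:04}-{:02}-{:02}T{:02}:{:02}:{:02}.{:06}Z",
        raw[0], month, day, raw[2], raw[3], raw[4], raw[5] / 1000
    );
    assert_eq!(formatted, "2023-08-22T15:03:34.422482Z");
}
```

So no data is lost, but the array format is opaque to most JSON consumers, which is why we want a standard string format instead.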
## Choose the correct serialization method
To fix the serialization of the `OffsetDateTime` type in the `User` struct, we need to specify the correct serialization method for the `created_at` field.
We can use the `serde_with` crate with `Rfc3339` in the `serde_as` attribute, which will serialize an `OffsetDateTime` like `1985-04-12T23:20:50.52Z` instead of as an array of integers like `[2023, 234, 15, 3, 34, 422482000, 0, 0, 0]`.
`Rfc3339` is one of the well-known formats provided by `time::format_description`:

```rust
/// Well-known formats, typically standards.
pub mod well_known {
    pub mod iso8601;
    mod rfc2822;
    mod rfc3339;

    #[doc(inline)]
    pub use iso8601::Iso8601;
    pub use rfc2822::Rfc2822;
    pub use rfc3339::Rfc3339;
}
```
You can use the `serde_with` crate as follows:

- Place the `#[serde_as]` attribute before the `#[derive]` attribute.
- Use `#[serde_as(as = "...")]` instead of `#[serde(with = "...")]` to annotate fields in the struct.
Below is an example of using `serde_with` together with `serde_as` for the `User` struct.
```rust
use time::format_description::well_known::Rfc3339;

#[serde_with::serde_as]
#[derive(Debug, Serialize)]
pub struct User {
    pub id: i64,
    pub username: String,
    pub disabled: bool,
    pub gender: Gender,
    #[serde_as(as = "Rfc3339")]
    pub created_at: OffsetDateTime,
}

#[derive(Clone, PartialEq, PartialOrd, Serialize, sqlx::Type, Debug)]
#[sqlx(type_name = "gender")]
#[sqlx(rename_all = "lowercase")]
pub enum Gender {
    Male,
    Female,
    Other,
}
```
Notice that we use `#[serde_as(as = "Rfc3339")]` to annotate the `created_at` field, whose type is `OffsetDateTime`.
It's quite convenient to use.
## Request data through the API after using serde_as
Now, when we request the data, we get the datetime as we wanted.
```sh
curl '0:8080/api/v1/users' | jq
```
Output:
```json
[
  {
    "id": 1,
    "username": "john_doe",
    "disabled": false,
    "gender": "Male",
    "created_at": "2023-08-22T15:03:34.422482Z"
  },
  {
    "id": 2,
    "username": "jane_smith",
    "disabled": true,
    "gender": "Female",
    "created_at": "2023-08-22T15:03:34.422482Z"
  },
  {
    "id": 3,
    "username": "alex_jones",
    "disabled": false,
    "gender": "Other",
    "created_at": "2023-08-22T15:03:34.422482Z"
  }
]
```
🎉🎉🎉