GRDB.swift

A versatile SQLite toolkit.


Demo Applications & Frequently Asked Questions

  • [Demo Applications]: Three flavors: vanilla UIKit, Combine + SwiftUI, and Async/Await + SwiftUI.
  • [FAQ]

Reference

SQLite and SQL

Records and the Query Interface

Application Tools

  • [Migrations]: Transform your database as your application evolves.
  • [Full-Text Search]: Perform efficient and customizable full-text searches.
  • [Database Observation]: Observe database changes and transactions.
  • Encryption: Encrypt your database with SQLCipher.
  • Backup: Dump the content of a database to another.
  • Interrupt a Database: Abort any pending database operation.
  • [Sharing a Database]: How to share an SQLite database between multiple processes - recommendations for App Group containers, App Extensions, App Sandbox, and file coordination.

Good to Know

CocoaPods

CocoaPods is a dependency manager for Xcode projects. To use GRDB with CocoaPods (version 1.2 or higher), specify in your Podfile:

pod 'GRDB.swift'

GRDB can be installed as a framework or a static library.

Fetch Queries

[Database connections] let you fetch database rows, plain values, and custom models aka “records”.

Rows are the raw results of SQL queries:

try dbQueue.read { db in
    if let row = try Row.fetchOne(db, sql: "SELECT * FROM wine WHERE id = ?", arguments: [1]) {
        let name: String = row["name"]
        let color: Color = row["color"]
        print(name, color)
    }
}

Values are the Bool, Int, String, Date, Swift enums, etc. stored in row columns:

try dbQueue.read { db in
    let urls = try URL.fetchCursor(db, sql: "SELECT url FROM wine")
    while let url = try urls.next() {
        print(url)
    }
}

Records are your application objects that can initialize themselves from rows:

let wines = try dbQueue.read { db in
    try Wine.fetchAll(db, sql: "SELECT * FROM wine")
}

Fetching Methods

Throughout GRDB, you can always fetch cursors, arrays, sets, or single values of any fetchable type (database row, simple value, or custom record):

try Row.fetchCursor(...) // A Cursor of Row
try Row.fetchAll(...)    // [Row]
try Row.fetchSet(...)    // Set<Row>
try Row.fetchOne(...)    // Row?
  • fetchCursor returns a cursor over fetched values:

    let rows = try Row.fetchCursor(db, sql: "SELECT ...") // A Cursor of Row
    
  • fetchAll returns an array:

    let players = try Player.fetchAll(db, sql: "SELECT ...") // [Player]
    
  • fetchSet returns a set:

    let names = try String.fetchSet(db, sql: "SELECT ...") // Set<String>
    
  • fetchOne returns a single optional value, and consumes a single database row (if any).

    let count = try Int.fetchOne(db, sql: "SELECT COUNT(*) ...") // Int?
    

All those fetching methods require an SQL string that contains a single SQL statement. When you want to fetch from multiple statements joined with a semicolon, iterate the multiple [prepared statements] found in the SQL string.
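
For example, here is a hedged sketch of iterating such statements, assuming the db.allStatements(sql:) method available in recent GRDB versions:

try dbQueue.read { db in
    // One cursor of prepared statements, in the order they appear in the SQL string
    let statements = try db.allStatements(sql: """
        SELECT COUNT(*) FROM player;
        SELECT COUNT(*) FROM team;
        """)
    while let statement = try statements.next() {
        // Each prepared statement can feed the regular fetching methods
        let count = try Int.fetchOne(statement)
        print(count ?? 0)
    }
}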

Cursors

📖 Cursor

Whenever you consume several rows from the database, you can fetch an Array, a Set, or a Cursor.

The fetchAll() and fetchSet() methods return regular Swift arrays and sets that you iterate like all other arrays and sets:

try dbQueue.read { db in
    // [Player]
    let players = try Player.fetchAll(db, sql: "SELECT ...")
    for player in players {
        // use player
    }
}

Unlike arrays and sets, cursors returned by fetchCursor() load their results step after step:

try dbQueue.read { db in
    // Cursor of Player
    let players = try Player.fetchCursor(db, sql: "SELECT ...")
    while let player = try players.next() {
        // use player
    }
}
  • Cursors cannot be used on any thread: you must consume a cursor on the dispatch queue it was created in. In particular, don’t extract a cursor out of a database access method:

    // Wrong
    let cursor = try dbQueue.read { db in
        try Player.fetchCursor(db, ...)
    }
    while let player = try cursor.next() { ... }
    

    Conversely, arrays and sets may be consumed on any thread:

    // OK
    let array = try dbQueue.read { db in
        try Player.fetchAll(db, ...)
    }
    for player in array { ... }
    
  • Cursors can be iterated only one time. Arrays and sets can be iterated many times.

  • Cursors iterate database results in a lazy fashion, and don’t consume much memory. Arrays and sets contain copies of database values, and may take a lot of memory when there are many fetched results.

  • Cursors are granted direct access to SQLite, unlike arrays and sets that have to take the time to copy database values. If you are looking for extra performance, you may prefer cursors.

  • Cursors can feed Swift collections.

    Most of the time, you will use fetchAll or fetchSet when you want an array or a set. For more specific needs, you may prefer one of the initializers below. All of them accept an extra optional minimumCapacity argument, which helps optimize your app when you have an idea of the number of elements in a cursor (the built-in fetchAll and fetchSet do not perform such an optimization).

    Arrays and all types conforming to RangeReplaceableCollection:

    // [String]
    let cursor = try String.fetchCursor(db, ...)
    let array = try Array(cursor)
    

    Sets:

    // Set<Int>
    let cursor = try Int.fetchCursor(db, ...)
    let set = try Set(cursor)
    

    Dictionaries:

    // [Int64: [Player]]
    let cursor = try Player.fetchCursor(db)
    let dictionary = try Dictionary(grouping: cursor, by: { $0.teamID })
    
    
    // [Int64: Player]
    let cursor = try Player.fetchCursor(db).map { ($0.id, $0) }
    let dictionary = try Dictionary(uniqueKeysWithValues: cursor)
    
  • Cursors adopt the Cursor protocol, which looks a lot like standard lazy sequences of Swift. As such, cursors come with many convenience methods: compactMap, contains, dropFirst, dropLast, drop(while:), enumerated, filter, first, flatMap, forEach, joined, joined(separator:), max, max(by:), min, min(by:), map, prefix, prefix(while:), reduce, reduce(into:), suffix:

    // Prints all Github links
    try URL
        .fetchCursor(db, sql: "SELECT url FROM link")
        .filter { url in url.host == "github.com" }
        .forEach { url in print(url) }
    
    
    // An efficient cursor of coordinates:
    let locations = try Row
        .fetchCursor(db, sql: "SELECT latitude, longitude FROM place")
        .map { row in
            CLLocationCoordinate2D(latitude: row[0], longitude: row[1])
        }
    
  • Cursors are not Swift sequences. That’s because Swift sequences can’t handle iteration errors, whereas reading SQLite results may fail at any time.

  • Cursors require a little care:

    • Don’t modify the results during a cursor iteration:

      // Undefined behavior
      while let player = try players.next() {
          try db.execute(sql: "DELETE ...")
      }
      
    • Don’t turn a cursor of Row into an array or a set. You would not get the distinct rows you expect. To get a array of rows, use Row.fetchAll(...). To get a set of rows, use Row.fetchSet(...). Generally speaking, make sure you copy a row whenever you extract it from a cursor for later use: row.copy().

If you don’t see, or don’t care about the difference, use arrays. If you care about memory and performance, use cursors when appropriate.

Row Queries

Data (and Memory Savings)

Data suits the BLOB SQLite columns. It can be stored and fetched from the database just like other values:

let rows = try Row.fetchCursor(db, sql: "SELECT data, ...")
while let row = try rows.next() {
    let data: Data = row["data"]
}

At each step of the request iteration, the row[] subscript creates two copies of the database bytes: one fetched by SQLite, and another stored in the Swift Data value.

You have the opportunity to save memory by not copying the data fetched by SQLite:

while let row = try rows.next() {
    try row.withUnsafeData(name: "data") { (data: Data?) in
        ...
    }
}

The non-copied data does not live longer than the iteration step: make sure that you do not use it past this point.

Date

Date can be stored and fetched from the database just like other values:

try db.execute(
    sql: "INSERT INTO player (creationDate, ...) VALUES (?, ...)",
    arguments: [Date(), ...])

let row = try Row.fetchOne(db, ...)!
let creationDate: Date = row["creationDate"]

Dates are stored using the format “YYYY-MM-DD HH:MM:SS.SSS” in the UTC time zone. It is precise to the millisecond.

Note: this format was chosen because it is the only format that is:

  • Comparable (ORDER BY date works)
  • Comparable with the SQLite keyword CURRENT_TIMESTAMP (WHERE date > CURRENT_TIMESTAMP works)
  • Able to feed SQLite date & time functions
  • Precise enough

Warning: the range of valid years in the SQLite date format is 0000-9999. You will experience problems with years outside of this range, such as decoding errors, or invalid date computations with SQLite date & time functions.

Some applications may prefer another date format:

  • Some may prefer ISO-8601, with a T separator.
  • Some may prefer ISO-8601, with a time zone.
  • Some may need to store years beyond the 0000-9999 range.
  • Some may need sub-millisecond precision.
  • Some may need exact Date roundtrip.
  • Etc.

You should think twice before choosing a different date format:

  • ISO-8601 is about exchange and communication, when SQLite is about storage and data manipulation. Sharing the same representation in your database and in JSON files only provides a superficial convenience, and should be the least of your priorities. Don’t store dates as ISO-8601 without understanding what you lose. For example, ISO-8601 time zones forbid database-level date comparison.
  • Sub-millisecond precision and exact Date roundtrip are not as obvious needs as it seems at first sight. Dates generally don’t precisely roundtrip as soon as they leave your application anyway, because the other systems your app communicates with use their own date representation (the Android version of your app, the server your application is talking to, etc.) On top of that, Date comparison is at least as hard and nasty as floating point comparison.

The customization of date format is explicit. For example:

let date = Date()
let timeInterval = date.timeIntervalSinceReferenceDate
try db.execute(
    sql: "INSERT INTO player (creationDate, ...) VALUES (?, ...)",
    arguments: [timeInterval, ...])

if let row = try Row.fetchOne(db, ...) {
    let timeInterval: TimeInterval = row["creationDate"]
    let creationDate = Date(timeIntervalSinceReferenceDate: timeInterval)
}

See also [Codable Records] for more date customization options, and [DatabaseValueConvertible] if you want to define a Date-wrapping type with customized database representation.
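
As an illustration, here is a hedged sketch of such a Date-wrapping type; the UnixDate name is hypothetical, and the type stores dates as Unix timestamps:

struct UnixDate: DatabaseValueConvertible {
    var date: Date
    
    // Encoded as a number of seconds since 1970
    var databaseValue: DatabaseValue {
        date.timeIntervalSince1970.databaseValue
    }
    
    // Decoded from a number of seconds since 1970
    static func fromDatabaseValue(_ dbValue: DatabaseValue) -> UnixDate? {
        guard let timestamp = Double.fromDatabaseValue(dbValue) else {
            return nil
        }
        return UnixDate(date: Date(timeIntervalSince1970: timestamp))
    }
}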

UUID

UUID can be stored and fetched from the database just like other values.

GRDB stores UUIDs as 16-byte data blobs, and decodes them from both 16-byte data blobs and strings such as “E621E1F8-C36C-495A-93FC-0C247A3E6E5F”.
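
For example, here is a minimal sketch, assuming a hypothetical device table with an identifier column:

try dbQueue.write { db in
    // Stored as a 16-byte blob
    try db.execute(
        sql: "INSERT INTO device (identifier) VALUES (?)",
        arguments: [UUID()])
}

let identifiers = try dbQueue.read { db in
    // Decodes both blobs and uuid strings
    try UUID.fetchAll(db, sql: "SELECT identifier FROM device")
}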

Swift Enums

Swift enums and generally all types that adopt the RawRepresentable protocol can be stored and fetched from the database just like their raw values:

enum Color : Int {
    case red, white, rose
}

enum Grape : String {
    case chardonnay, merlot, riesling
}

// Declare empty DatabaseValueConvertible adoption
extension Color : DatabaseValueConvertible { }
extension Grape : DatabaseValueConvertible { }

// Store
try db.execute(
    sql: "INSERT INTO wine (grape, color) VALUES (?, ?)",
    arguments: [Grape.merlot, Color.red])

// Read
let rows = try Row.fetchCursor(db, sql: "SELECT * FROM wine")
while let row = try rows.next() {
    let grape: Grape = row["grape"]
    let color: Color = row["color"]
}

When a database value does not match any enum case, you get a fatal error. This fatal error can be avoided with the DatabaseValue type:

let row = try Row.fetchOne(db, sql: "SELECT 'syrah'")!

row[0] as String  // "syrah"
row[0] as Grape?  // fatal error: could not convert "syrah" to Grape.
row[0] as Grape   // fatal error: could not convert "syrah" to Grape.

let dbValue: DatabaseValue = row[0]
if dbValue.isNull {
    // Handle NULL
} else if let grape = Grape.fromDatabaseValue(dbValue) {
    // Handle valid grape
} else {
    // Handle unknown grape
}

Custom SQL Functions and Aggregates

SQLite lets you define SQL functions and aggregates.

A custom SQL function or aggregate extends SQLite:

SELECT reverse(name) FROM player;   -- custom function
SELECT maxLength(name) FROM player; -- custom aggregate

Custom SQL Functions

📖 DatabaseFunction

A custom function takes an array of DatabaseValue arguments, and returns any valid value (Bool, Int, String, Date, Swift enums, etc.). The number of database values is guaranteed to be argumentCount.

SQLite has the opportunity to perform additional optimizations when functions are “pure”, which means that their result only depends on their arguments. So make sure to set the pure argument to true when possible.

let reverse = DatabaseFunction("reverse", argumentCount: 1, pure: true) { (values: [DatabaseValue]) in
    // Extract string value, if any...
    guard let string = String.fromDatabaseValue(values[0]) else {
        return nil
    }
    // ... and return reversed string:
    return String(string.reversed())
}

You make a function available to a database connection through its configuration:

var config = Configuration()
config.prepareDatabase { db in
    db.add(function: reverse)
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

try dbQueue.read { db in
    // "oof"
    try String.fetchOne(db, sql: "SELECT reverse('foo')")!
}

Functions can take a variable number of arguments. When you don’t provide an explicit argumentCount, the function accepts any number of arguments:

let averageOf = DatabaseFunction("averageOf", pure: true) { (values: [DatabaseValue]) in
    let doubles = values.compactMap { Double.fromDatabaseValue($0) }
    return doubles.reduce(0, +) / Double(doubles.count)
}
db.add(function: averageOf)

// 2.0
try Double.fetchOne(db, sql: "SELECT averageOf(1, 2, 3)")!

Functions can throw:

let sqrt = DatabaseFunction("sqrt", argumentCount: 1, pure: true) { (values: [DatabaseValue]) in
    guard let double = Double.fromDatabaseValue(values[0]) else {
        return nil
    }
    guard double >= 0 else {
        throw DatabaseError(message: "invalid negative number")
    }
    return sqrt(double)
}
db.add(function: sqrt)

// SQLite error 1 with statement `SELECT sqrt(-1)`: invalid negative number
try Double.fetchOne(db, sql: "SELECT sqrt(-1)")!

Use custom functions in the query interface:

// SELECT reverseString("name") FROM player
Player.select(reverseString(nameColumn))

GRDB ships with built-in SQL functions that perform unicode-aware string transformations. See Unicode.

Custom Aggregates

📖 DatabaseFunction, DatabaseAggregate

Before registering a custom aggregate, you need to define a type that adopts the DatabaseAggregate protocol:

protocol DatabaseAggregate {
    // Initializes an aggregate
    init()
    
    // Called at each step of the aggregation
    mutating func step(_ dbValues: [DatabaseValue]) throws
    
    // Returns the final result
    func finalize() throws -> DatabaseValueConvertible?
}

For example:

struct MaxLength : DatabaseAggregate {
    var maxLength: Int = 0
    
    mutating func step(_ dbValues: [DatabaseValue]) {
        // At each step, extract string value, if any...
        guard let string = String.fromDatabaseValue(dbValues[0]) else {
            return
        }
        // ... and update the result
        let length = string.count
        if length > maxLength {
            maxLength = length
        }
    }
    
    func finalize() -> DatabaseValueConvertible? {
        maxLength
    }
}

let maxLength = DatabaseFunction(
    "maxLength",
    argumentCount: 1,
    pure: true,
    aggregate: MaxLength.self)

Like custom SQL Functions, you make an aggregate function available to a database connection through its configuration:

var config = Configuration()
config.prepareDatabase { db in
    db.add(function: maxLength)
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

try dbQueue.read { db in
    // Some Int
    try Int.fetchOne(db, sql: "SELECT maxLength(name) FROM player")!
}

The step method of the aggregate takes an array of DatabaseValue. This array contains as many values as the argumentCount parameter (or any number of values, when argumentCount is omitted).

The finalize method of the aggregate returns the final aggregated value (Bool, Int, String, Date, Swift enums, etc.).

SQLite has the opportunity to perform additional optimizations when aggregates are “pure”, which means that their result only depends on their inputs. So make sure to set the pure argument to true when possible.

Use custom aggregates in the query interface:

// SELECT maxLength("name") FROM player
let request = Player.select(maxLength.apply(nameColumn))
try Int.fetchOne(db, request) // Int?

Database Schema Introspection

GRDB comes with a set of schema introspection methods:

try dbQueue.read { db in
    // Bool, true if the table exists
    try db.tableExists("player")
    
    // [ColumnInfo], the columns in the table
    try db.columns(in: "player")
    
    // PrimaryKeyInfo
    try db.primaryKey("player")
    
    // [ForeignKeyInfo], the foreign keys defined on the table
    try db.foreignKeys(on: "player")
    
    // [IndexInfo], the indexes defined on the table
    try db.indexes(on: "player")
    
    // Bool, true if column(s) is a unique key (primary key or unique index)
    try db.table("player", hasUniqueKey: ["email"])
}

// Bool, true if argument is the name of an internal SQLite table
Database.isSQLiteInternalTable(...)

// Bool, true if argument is the name of an internal GRDB table
Database.isGRDBInternalTable(...)

For more information, see tableExists(_:) and related methods.

Raw SQLite Pointers

Not all SQLite APIs are exposed in GRDB, but you can still use the SQLite C Interface and call SQLite C functions.

Those functions are embedded right into the GRDB module, regardless of the underlying SQLite implementation (system SQLite, SQLCipher, or [custom SQLite build]):

import GRDB

let sqliteVersion = String(cString: sqlite3_libversion())

Raw pointers to database connections and statements are available through the Database.sqliteConnection and Statement.sqliteStatement properties:

try dbQueue.read { db in
    // The raw pointer to a database connection:
    let sqliteConnection = db.sqliteConnection

    // The raw pointer to a statement:
    let statement = try db.makeStatement(sql: "SELECT ...")
    let sqliteStatement = statement.sqliteStatement
}

Note

  • Those pointers are owned by GRDB: don’t close connections or finalize statements created by GRDB.
  • GRDB opens SQLite connections in the “multi-thread mode”, which (oddly) means that they are not thread-safe. Make sure you touch raw databases and statements inside their dedicated dispatch queues.
  • Use the raw SQLite C Interface at your own risk. GRDB won’t prevent you from shooting yourself in the foot.
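
For example, here is a hedged sketch that calls an SQLite C function from inside a database access method, so that the connection is only touched from its dedicated dispatch queue:

try dbQueue.read { db in
    // Number of rows changed by the most recently completed statement on this connection
    let changeCount = sqlite3_changes(db.sqliteConnection)
    print(changeCount)
}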

Records

On top of the SQLite API, GRDB provides protocols and a class that help you manipulate database rows as regular objects named “records”:

try dbQueue.write { db in
    if var place = try Place.fetchOne(db, id: 1) {
        place.isFavorite = true
        try place.update(db)
    }
}

Of course, you need to open a [database connection], and create database tables first.
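
For example, here is a minimal sketch that opens a connection and creates a table (the database path and schema are illustrative):

// Open a database connection
let dbQueue = try DatabaseQueue(path: "/path/to/database.sqlite")

// Create a database table
try dbQueue.write { db in
    try db.create(table: "place") { t in
        t.autoIncrementedPrimaryKey("id")
        t.column("title", .text).notNull()
        t.column("isFavorite", .boolean).notNull().defaults(to: false)
    }
}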

To define your custom records, you subclass the ready-made Record class, or you extend your structs and classes with protocols that come with focused sets of features: fetching methods, persistence methods, record comparison…

Extending structs with record protocols is more “swifty”. Subclassing the Record class is more “classic”. You can choose either way. See some examples of record definitions, and the list of record methods for an overview.

Note: if you are familiar with Core Data’s NSManagedObject or Realm’s Object, you may experience a cultural shock: GRDB records are not uniqued, do not auto-update, and do not lazy-load. This is both a purpose, and a consequence of protocol-oriented programming. You should read How to build an iOS application with SQLite and GRDB.swift for a general introduction.
:bulb: Tip: after you have read this chapter, check the Recommended Practices for Designing Record Types Guide.

:bulb: Tip: see the [Demo Applications] for sample apps that use records.

Overview

Protocols and the Record Class

Records in a Glance

Inserting Records

To insert a record in the database, call the insert method:

let player = Player(name: "Arthur", email: "arthur@example.com")
try player.insert(db)

:point_right: insert is available for subclasses of the Record class, and types that adopt the [PersistableRecord] protocol.

Fetching Records

To fetch records from the database, call a fetching method:

let arthur = try Player.fetchOne(db,            // Player?
    sql: "SELECT * FROM players WHERE name = ?",
    arguments: ["Arthur"])

let bestPlayers = try Player                    // [Player]
    .order(Column("score").desc)
    .limit(10)
    .fetchAll(db)
    
let spain = try Country.fetchOne(db, id: "ES")  // Country?
let italy = try Country.find(db, id: "IT")      // Country
:point_right: Fetching from raw SQL is available for subclasses of the Record class, and types that adopt the [FetchableRecord] protocol.

:point_right: Fetching without SQL, using the query interface, is available for subclasses of the Record class, and types that adopt both [FetchableRecord] and [TableRecord] protocol.

Updating Records

To update a record in the database, call the update method:

var player: Player = ...
player.score = 1000
try player.update(db)

It is possible to avoid useless updates:

// does not hit the database if score has not changed
try player.updateChanges(db) {
    $0.score = 1000
}

See the query interface for batch updates:

try Player
    .filter(Column("team") == "red")
    .updateAll(db, Column("score") += 1)

:point_right: update methods are available for subclasses of the Record class, and types that adopt the [PersistableRecord] protocol. Batch updates are available on the [TableRecord] protocol.

Deleting Records

To delete a record in the database, call the delete method:

let player: Player = ...
try player.delete(db)

You can also delete by primary key, unique key, or perform batch deletes (see Delete Requests):

try Player.deleteOne(db, id: 1)
try Player.deleteOne(db, key: ["email": "arthur@example.com"])
try Country.deleteAll(db, ids: ["FR", "US"])
try Player
    .filter(Column("email") == nil)
    .deleteAll(db)

:point_right: delete methods are available for subclasses of the Record class, and types that adopt the [PersistableRecord] protocol. Batch deletes are available on the [TableRecord] protocol.

Counting Records

To count records, call the fetchCount method:

let playerCount: Int = try Player.fetchCount(db)

let playerWithEmailCount: Int = try Player
    .filter(Column("email") == nil)
    .fetchCount(db)

:point_right: fetchCount is available for subclasses of the Record class, and types that adopt the [TableRecord] protocol.

Details follow:

Record Protocols Overview

GRDB ships with three record protocols. Your own types will adopt one or several of them, according to the abilities you want to extend your types with.

  • [FetchableRecord] is able to decode database rows.

    struct Place: FetchableRecord { ... }
    let places = try dbQueue.read { db in
        try Place.fetchAll(db, sql: "SELECT * FROM place")
    }
    

    :bulb: Tip: FetchableRecord can derive its implementation from the standard Decodable protocol. See [Codable Records] for more information.

    FetchableRecord can decode database rows, but it is not able to build SQL requests for you. For that, you also need TableRecord:

  • [TableRecord] is able to generate SQL queries:

    struct Place: TableRecord { ... }
    let placeCount = try dbQueue.read { db in
        // Generates and runs `SELECT COUNT(*) FROM place`
        try Place.fetchCount(db)
    }
    

    When a type adopts both TableRecord and FetchableRecord, it can load from those requests:

    struct Place: TableRecord, FetchableRecord { ... }
    try dbQueue.read { db in
        let places = try Place.order(Column("title")).fetchAll(db)
        let paris = try Place.fetchOne(db, id: 1)
    }
    
  • [PersistableRecord] is able to write: it can create, update, and delete rows in the database:

    struct Place : PersistableRecord { ... }
    try dbQueue.write { db in
        try Place.deleteOne(db, id: 1)
        try Place(...).insert(db)
    }
    

    A persistable record can also compare itself against other records, and avoid useless database updates.

    :bulb: Tip: PersistableRecord can derive its implementation from the standard Encodable protocol. See [Codable Records] for more information.

TableRecord Protocol

📖 TableRecord

The TableRecord protocol generates SQL for you. To use TableRecord, subclass the Record class, or adopt it explicitly:

protocol TableRecord {
    static var databaseTableName: String { get }
    static var databaseSelection: [any SQLSelectable] { get }
}

The databaseSelection type property is optional, and documented in the [Columns Selected by a Request] chapter.

The databaseTableName type property is the name of a database table. By default, it is derived from the type name:

struct Place: TableRecord { }
print(Place.databaseTableName) // prints "place"

For example:

  • Place: place
  • Country: country
  • PostalAddress: postalAddress
  • HTTPRequest: httpRequest
  • TOEFL: toefl

You can still provide a custom table name:

struct Place: TableRecord {
    static let databaseTableName = "location"
}
print(Place.databaseTableName) // prints "location"

Subclasses of the Record class must always override their superclass’s databaseTableName property:

class Place: Record {
    override class var databaseTableName: String { "place" }
}
print(Place.databaseTableName) // prints "place"

When a type adopts both TableRecord and FetchableRecord, it can be fetched using the query interface:

// SELECT * FROM place WHERE name = 'Paris'
let paris = try Place.filter(nameColumn == "Paris").fetchOne(db)

TableRecord can also deal with primary and unique keys: see Fetching by Key and Testing for Record Existence.

PersistableRecord Protocol

📖 EncodableRecord, MutablePersistableRecord, PersistableRecord

GRDB record types can create, update, and delete rows in the database.

Those abilities are granted by three protocols:

// Defines how a record encodes itself into the database
protocol EncodableRecord {
    /// Defines the values persisted in the database
    func encode(to container: inout PersistenceContainer) throws
}

// Adds persistence methods
protocol MutablePersistableRecord: TableRecord, EncodableRecord {
    /// Optional method that lets your adopting type store its rowID upon
    /// successful insertion. Don't call it directly: it is called for you.
    mutating func didInsert(_ inserted: InsertionSuccess)
}

// Adds immutability
protocol PersistableRecord: MutablePersistableRecord {
    /// Non-mutating version of the optional didInsert(_:)
    func didInsert(_ inserted: InsertionSuccess)
}

Yes, three protocols instead of one. Here is how you pick one or the other:

  • If your type is a class, choose PersistableRecord. On top of that, implement didInsert(_:) if the database table has an auto-incremented primary key.

  • If your type is a struct, and the database table has an auto-incremented primary key, choose MutablePersistableRecord, and implement didInsert(_:).

  • Otherwise, choose PersistableRecord, and ignore didInsert(_:).

The encode(to:) method defines which values (Bool, Int, String, Date, Swift enums, etc.) are assigned to database columns.

The optional didInsert method lets the adopting type store its rowID after successful insertion, and is only useful for tables that have an auto-incremented primary key. It is called from a protected dispatch queue, and serialized with all database updates.

To use the persistable protocols, subclass the Record class, or adopt one of them explicitly. For example:

extension Place : MutablePersistableRecord {
    /// The values persisted in the database
    func encode(to container: inout PersistenceContainer) {
        container["id"] = id
        container["title"] = title
        container["latitude"] = coordinate.latitude
        container["longitude"] = coordinate.longitude
    }
    
    // Update auto-incremented id upon successful insertion
    mutating func didInsert(_ inserted: InsertionSuccess) {
        id = inserted.rowID
    }
}

var paris = Place(
    id: nil,
    title: "Paris",
    coordinate: CLLocationCoordinate2D(latitude: 48.8534100, longitude: 2.3488000))

try paris.insert(db)
paris.id   // some value

Persistence containers also accept column enums:

extension Place : MutablePersistableRecord {
    enum Columns: String, ColumnExpression {
        case id, title, latitude, longitude
    }
    
    func encode(to container: inout PersistenceContainer) {
        container[Columns.id] = id
        container[Columns.title] = title
        container[Columns.latitude] = coordinate.latitude
        container[Columns.longitude] = coordinate.longitude
    }
}

When your record type adopts the standard Encodable protocol, you don’t have to provide the implementation for encode(to:). See [Codable Records] for more information:

// That's all
struct Player: Encodable, MutablePersistableRecord {
    var id: Int64?
    var name: String
    var score: Int
    
    // Update auto-incremented id upon successful insertion
    mutating func didInsert(_ inserted: InsertionSuccess) {
        id = inserted.rowID
    }
}

Upsert

UPSERT is an SQLite feature that causes an INSERT to behave as an UPDATE or a no-op if the INSERT would violate a uniqueness constraint (primary key or unique index).

Note: Upsert APIs are available from SQLite 3.35.0+: iOS 15.0+, macOS 12.0+, tvOS 15.0+, watchOS 8.0+, or with a [custom SQLite build] or SQLCipher.

Note: With regard to persistence callbacks, an upsert behaves exactly like an insert. In particular: the aroundInsert(_:) and didInsert(_:) callbacks report the rowid of the inserted or updated row; willUpdate, aroundUpdate, and didUpdate are not called.

[PersistableRecord] provides three upsert methods:

  • upsert(_:)

    Inserts or updates a record.

    The upsert behavior is triggered by a violation of any uniqueness constraint on the table (primary key or unique index). In case of conflict, all columns but the primary key are overwritten with the inserted values:

    struct Player: Encodable, PersistableRecord {
        var id: Int64
        var name: String
        var score: Int
    }
    
    
    // INSERT INTO player (id, name, score)
    // VALUES (1, 'Arthur', 1000)
    // ON CONFLICT DO UPDATE SET
    //   name = excluded.name,
    //   score = excluded.score
    let player = Player(id: 1, name: "Arthur", score: 1000)
    try player.upsert(db)
    
  • upsertAndFetch(_:onConflict:doUpdate:) (requires [FetchableRecord] conformance)

    Inserts or updates a record, and returns the upserted record.

    The onConflict and doUpdate arguments let you further control the upsert behavior. Make sure you check the SQLite UPSERT documentation for detailed information.

    • onConflict: the “conflict target” is the array of columns in the uniqueness constraint (primary key or unique index) that triggers the upsert.

      If empty (the default), all uniqueness constraints are considered.

    • doUpdate: a closure that returns columns assignments to perform in case of conflict. Other columns are overwritten with the inserted values.

      By default, all inserted columns but the primary key and the conflict target are overwritten.

    In the example below, we upsert the new vocabulary word “jovial”. It is inserted if that word is not already in the dictionary. Otherwise, count is incremented, isTainted is not overwritten, and kind is overwritten:

    // CREATE TABLE vocabulary(
    //   word TEXT NOT NULL PRIMARY KEY,
    //   kind TEXT NOT NULL,
    //   isTainted BOOLEAN DEFAULT 0,
    //   count INT DEFAULT 1)
    struct Vocabulary: Encodable, PersistableRecord {
        var word: String
        var kind: String
        var isTainted: Bool
    }
    
    
    // INSERT INTO vocabulary(word, kind, isTainted)
    // VALUES('jovial', 'adjective', 0)
    // ON CONFLICT(word) DO UPDATE SET \
    //   count = count + 1,   -- on conflict, count is incremented
    //   kind = excluded.kind -- on conflict, kind is overwritten
    // RETURNING *
    let vocabulary = Vocabulary(word: "jovial", kind: "adjective", isTainted: false)
    let upserted = try vocabulary.upsertAndFetch(
        db, onConflict: ["word"],
        doUpdate: { _ in
            [Column("count") += 1,            // on conflict, count is incremented
             Column("isTainted").noOverwrite] // on conflict, isTainted is NOT overwritten
        })
    

    The doUpdate closure accepts an excluded TableAlias argument that refers to the inserted values that trigger the conflict. You can use it to specify an explicit overwrite, or to perform a computation. In the next example, the upsert keeps the maximum date in case of conflict:

    // INSERT INTO message(id, text, date)
    // VALUES(...)
    // ON CONFLICT DO UPDATE SET \
    //   text = excluded.text,
    //   date = MAX(date, excluded.date)
    // RETURNING *
    let upserted = try message.upsertAndFetch(doUpdate: { excluded in
        // keep the maximum date in case of conflict
        [Column("date").set(to: max(Column("date"), excluded["date"]))]
    })
    
  • upsertAndFetch(_:as:onConflict:doUpdate:) (does not require [FetchableRecord] conformance)

    This method is identical to upsertAndFetch(_:onConflict:doUpdate:) described above, but you can provide a distinct [FetchableRecord] record type as a result, in order to specify the returned columns.

Persistence Callbacks

Your custom type may want to perform extra work when the persistence methods are invoked.

To this end, your record type can implement persistence callbacks. Callbacks are methods that get called at certain moments of a record’s life cycle. With callbacks it is possible to write code that will run whenever a record is inserted, updated, or deleted.

In order to use a callback method, you need to provide its implementation. For example, a frequently used callback is didInsert, in the case of auto-incremented database ids:

struct Player: MutablePersistableRecord {
    var id: Int64?
    
    // Update auto-incremented id upon successful insertion
    mutating func didInsert(_ inserted: InsertionSuccess) {
        id = inserted.rowID
    }
}

try dbQueue.write { db in
    var player = Player(id: nil, ...)
    try player.insert(db)
    print(player.id) // didInsert was called: prints some non-nil id
}

When you subclass the Record class, override the callback, and make sure you call super at some point of your implementation:

class Player: Record {
    var id: Int64?
    
    // Update auto-incremented id upon successful insertion
    override func didInsert(_ inserted: InsertionSuccess) {
        super.didInsert(inserted)
        id = inserted.rowID
    }
}

Callbacks can also help implementing record validation:

struct Link: PersistableRecord {
    var url: URL
    
    func willSave(_ db: Database) throws {
        if url.host == nil {
            throw ValidationError("url must be absolute.")
        }
    }
}

try link.insert(db) // Calls the willSave callback
try link.update(db) // Calls the willSave callback
try link.save(db)   // Calls the willSave callback
try link.upsert(db) // Calls the willSave callback

Available Callbacks

Here is a list of all the available [persistence callbacks], in the order in which they are called during the respective operations:

  • Inserting a record (all record.insert and record.upsert methods)

    • willSave
    • aroundSave
    • willInsert
    • aroundInsert
    • didInsert
    • didSave
  • Updating a record (all record.update methods)

    • willSave
    • aroundSave
    • willUpdate
    • aroundUpdate
    • didUpdate
    • didSave
  • Deleting a record (only the record.delete(_:) method)

    • willDelete
    • aroundDelete
    • didDelete

For detailed information about each callback, check the reference.

In the MutablePersistableRecord protocol, willInsert and didInsert are mutating methods. In PersistableRecord, they are not mutating.

Note: The record.save(_:) method performs an UPDATE if the record has a non-null primary key, and then, if no row was modified, an INSERT. It directly performs an INSERT if the record has no primary key, or a null primary key. It triggers update and/or insert callbacks accordingly.
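
For example, a minimal sketch, assuming a Player record with an auto-incremented id and a score property:

try dbQueue.write { db in
    var player = Player(id: nil, score: 100)
    try player.save(db)   // null primary key: INSERT, and didInsert sets player.id
    
    player.score = 150
    try player.save(db)   // non-null primary key: UPDATE
}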

Warning: Callbacks are only invoked from persistence methods called on record instances. Callbacks are not invoked when you call a type method, perform batch operations, or execute raw SQL.

Warning: When a did*** callback is invoked, do not assume that the change is actually persisted on disk, because the database may still be inside an uncommitted transaction. When you need to handle transaction completions, use afterNextTransaction(onCommit:onRollback:). For example:

struct PictureFile: PersistableRecord {
    var path: String
    
    func willDelete(_ db: Database) {
        db.afterNextTransaction { _ in
            try? deleteFileOnDisk()
        }
    }
}


Identifiable Records

When a record type maps a table with a single-column primary key, it is recommended to have it adopt the standard [Identifiable] protocol.

struct Player: Identifiable, FetchableRecord, PersistableRecord {
    var id: Int64 // fulfills the Identifiable requirement
    var name: String
    var score: Int
}

When id has a database-compatible type (Int64, Int, String, UUID, …), the Identifiable conformance unlocks type-safe record and request methods:

let player = try Player.find(db, id: 1)               // Player
let player = try Player.fetchOne(db, id: 1)           // Player?
let players = try Player.fetchAll(db, ids: [1, 2, 3]) // [Player]
let players = try Player.fetchSet(db, ids: [1, 2, 3]) // Set<Player>

let request = Player.filter(id: 1)
let request = Player.filter(ids: [1, 2, 3])

try Player.deleteOne(db, id: 1)
try Player.deleteAll(db, ids: [1, 2, 3])

Note: Identifiable is not available on all application targets, and not all tables have a single-column primary key. GRDB provides other methods that deal with primary and unique keys, but they won’t check the type of their arguments:

// Available on non-Identifiable types
try Player.fetchOne(db, key: 1)
try Player.fetchOne(db, key: ["email": "arthur@example.com"])
try Country.fetchAll(db, keys: ["FR", "US"])
try Citizenship.fetchOne(db, key: ["citizenId": 1, "countryCode": "FR"])

let request = Player.filter(key: 1)
let request = Player.filter(keys: [1, 2, 3])

try Player.deleteOne(db, key: 1)
try Player.deleteAll(db, keys: [1, 2, 3])

Note: It is not recommended to use Identifiable on record types that use an auto-incremented primary key:

// AVOID declaring Identifiable conformance when key is auto-incremented
struct Player {
    var id: Int64? // Not an id suitable for Identifiable
    var name: String
    var score: Int
}

extension Player: FetchableRecord, MutablePersistableRecord {
    // Update auto-incremented id upon successful insertion
    mutating func didInsert(_ inserted: InsertionSuccess) {
        id = inserted.rowID
    }
}

For a detailed rationale, please see [issue #1435](https://github.com/groue/GRDB.swift/issues/1435#issuecomment-1740857712).

Some database tables have a single-column primary key which is not called "id":

try db.create(table: "country") { t in
    t.primaryKey("isoCode", .text)
    t.column("name", .text).notNull()
    t.column("population", .integer).notNull()
}

In this case, Identifiable conformance can be achieved, for example, by returning the primary key column from the id property:

struct Country: Identifiable, FetchableRecord, PersistableRecord {
    var isoCode: String
    var name: String
    var population: Int
    
    // Fulfill the Identifiable requirement
    var id: String { isoCode }
}

let france = try dbQueue.read { db in
    try Country.fetchOne(db, id: "FR")
}

JSON Columns

When a Codable record contains a property that is not a simple value (Bool, Int, String, Date, Swift enums, etc.), that value is encoded and decoded as a JSON string. For example:

enum AchievementColor: String, Codable {
    case bronze, silver, gold
}

struct Achievement: Codable {
    var name: String
    var color: AchievementColor
}

struct Player: Codable, FetchableRecord, PersistableRecord {
    var name: String
    var score: Int
    var achievements: [Achievement] // stored in a JSON column
}

try dbQueue.write { db in
    // INSERT INTO player (name, score, achievements)
    // VALUES (
    //   'Arthur',
    //   100,
    //   '[{"color":"gold","name":"Use Codable Records"}]')
    let achievement = Achievement(name: "Use Codable Records", color: .gold)
    let player = Player(name: "Arthur", score: 100, achievements: [achievement])
    try player.insert(db)
}

GRDB uses the standard JSONDecoder and JSONEncoder from Foundation. By default, Data values are handled with the .base64 strategy, Date with the .millisecondsSince1970 strategy, and non-conforming floats with the .throw strategy.

You can customize the JSON format by implementing those methods:

protocol FetchableRecord {
    static func databaseJSONDecoder(for column: String) -> JSONDecoder
}

protocol EncodableRecord {
    static func databaseJSONEncoder(for column: String) -> JSONEncoder
}

:bulb: Tip: Make sure you set the JSONEncoder sortedKeys option. This option makes sure that the JSON output is stable. This stability is required for [Record Comparison] to work as expected, and for database observation tools such as [ValueObservation] to accurately recognize changed records.
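
For example, here is a hedged sketch of a custom JSON encoder for the Player record defined above, with the sortedKeys option enabled:

extension Player {
    static func databaseJSONEncoder(for column: String) -> JSONEncoder {
        let encoder = JSONEncoder()
        // Stable output, as recommended above
        encoder.outputFormatting = .sortedKeys
        // Keep the default GRDB strategies for the other options
        encoder.dataEncodingStrategy = .base64
        encoder.dateEncodingStrategy = .millisecondsSince1970
        encoder.nonConformingFloatEncodingStrategy = .throw
        return encoder
    }
}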

Column Names Coding Strategies

By default, [Codable Records] store their values into database columns that match their coding keys: the teamID property is stored into the teamID column.

This behavior can be overridden, so that you can, for example, store the teamID property into the team_id column:

protocol FetchableRecord {
    static var databaseColumnDecodingStrategy: DatabaseColumnDecodingStrategy { get }
}

protocol EncodableRecord {
    static var databaseColumnEncodingStrategy: DatabaseColumnEncodingStrategy { get }
}

See DatabaseColumnDecodingStrategy and DatabaseColumnEncodingStrategy to learn about all available strategies.
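
For example, here is a hedged sketch that stores camelCase coding keys into snake_case columns, assuming the snake case strategies; the PlayerStats type is hypothetical:

struct PlayerStats: Codable, FetchableRecord, PersistableRecord {
    // teamID is stored into the team_id column, and read back from it
    static let databaseColumnEncodingStrategy = DatabaseColumnEncodingStrategy.convertToSnakeCase
    static let databaseColumnDecodingStrategy = DatabaseColumnDecodingStrategy.convertFromSnakeCase
    
    var teamID: Int64
    var score: Int
}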

Data, Date, and UUID Coding Strategies

By default, [Codable Records] encode and decode their Data properties as blobs, and Date and UUID properties as described in the general Date and DateComponents and UUID chapters.

To sum up: dates encode themselves in the “YYYY-MM-DD HH:MM:SS.SSS” format, in the UTC time zone, and decode a variety of date formats and timestamps. UUIDs encode themselves as 16-byte data blobs, and decode both 16-byte data blobs and strings such as “E621E1F8-C36C-495A-93FC-0C247A3E6E5F”.

Those behaviors can be overridden:

protocol FetchableRecord {
    static var databaseDataDecodingStrategy: DatabaseDataDecodingStrategy { get }
    static var databaseDateDecodingStrategy: DatabaseDateDecodingStrategy { get }
}

protocol EncodableRecord {
    static var databaseDataEncodingStrategy: DatabaseDataEncodingStrategy { get }
    static var databaseDateEncodingStrategy: DatabaseDateEncodingStrategy { get }
    static var databaseUUIDEncodingStrategy: DatabaseUUIDEncodingStrategy { get }
}

See DatabaseDataDecodingStrategy, DatabaseDateDecodingStrategy, DatabaseDataEncodingStrategy, DatabaseDateEncodingStrategy, and DatabaseUUIDEncodingStrategy to learn about all available strategies.

There is no customization of uuid decoding, because UUID can already decode all its encoded variants (16-bytes blobs and uuid strings, both uppercase and lowercase).
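
For example, here is a hedged sketch of a record that stores its dates as Unix timestamps instead of the default text format; the Subscription type is hypothetical:

struct Subscription: Codable, FetchableRecord, PersistableRecord {
    // Dates are stored and read as numbers of seconds since 1970
    static let databaseDateEncodingStrategy = DatabaseDateEncodingStrategy.timeIntervalSince1970
    static let databaseDateDecodingStrategy = DatabaseDateDecodingStrategy.timeIntervalSince1970
    
    var id: Int64
    var registeredAt: Date
}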

Customized coding strategies apply:

  • When encoding and decoding database rows to and from records (fetching and persistence methods).
  • In requests by single-column primary key: fetchOne(_:id:), filter(id:), deleteAll(_:keys:), etc.

They do not apply in other requests based on data, date, or uuid values.

So make sure that those are properly encoded in your requests. For example:

struct Player: Codable, FetchableRecord, PersistableRecord, Identifiable {
    // UUIDs are stored as strings
    static let databaseUUIDEncodingStrategy = DatabaseUUIDEncodingStrategy.uppercaseString
    var id: UUID
    ...
}

try dbQueue.write { db in
    let uuid = UUID()
    let player = Player(id: uuid, ...)
    
    // OK: inserts a player in the database, with a string uuid
    try player.insert(db)
    
    // OK: performs a string-based query, finds the inserted player
    _ = try Player.filter(id: uuid).fetchOne(db)

    // NOT OK: performs a blob-based query, fails to find the inserted player
    _ = try Player.filter(Column("id") == uuid).fetchOne(db)
    
    // OK: performs a string-based query, finds the inserted player
    _ = try Player.filter(Column("id") == uuid.uuidString).fetchOne(db)
}

The userInfo Dictionary

Your [Codable Records] can be stored in the database, but they may also have other purposes. In this case, you may need to customize their implementations of Decodable.init(from:) and Encodable.encode(to:), depending on the context.

The standard way to provide such context is the userInfo dictionary. Implement those properties:

protocol FetchableRecord {
    static var databaseDecodingUserInfo: [CodingUserInfoKey: Any] { get }
}

protocol EncodableRecord {
    static var databaseEncodingUserInfo: [CodingUserInfoKey: Any] { get }
}

For example, here is a Player type that customizes its decoding:

// A key that holds a decoder's name
let decoderName = CodingUserInfoKey(rawValue: "decoderName")!

struct Player: FetchableRecord, Decodable {
    init(from decoder: Decoder) throws {
        // Print the decoder name
        let decoderName = decoder.userInfo[decoderName] as? String
        print("Decoded from \(decoderName ?? "unknown decoder")")
        ...
    }
}

You can have a specific decoding from JSON…

// prints "Decoded from JSON"
let decoder = JSONDecoder()
decoder.userInfo = [decoderName: "JSON"]
let player = try decoder.decode(Player.self, from: jsonData)

… and another one from database rows:

extension Player {
    static let databaseDecodingUserInfo: [CodingUserInfoKey: Any] = [decoderName: "database row"]
}

// prints "Decoded from database row"
let player = try Player.fetchOne(db, ...)

Note: make sure the databaseDecodingUserInfo and databaseEncodingUserInfo properties are explicitly declared as [CodingUserInfoKey: Any]. If they are not, the Swift compiler may silently miss the protocol requirement, resulting in sticky empty userInfo.

Tip: Derive Columns from Coding Keys

Codable types are granted with a CodingKeys enum. You can use them to safely define database columns:

struct Player: Codable {
    var id: Int64
    var name: String
    var score: Int
}

extension Player: FetchableRecord, PersistableRecord {
    enum Columns {
        static let id = Column(CodingKeys.id)
        static let name = Column(CodingKeys.name)
        static let score = Column(CodingKeys.score)
    }
}

See the query interface and Recommended Practices for Designing Record Types for further information.

Record Class

Record is a class that is designed to be subclassed. It inherits its features from the FetchableRecord, TableRecord, and PersistableRecord protocols. On top of that, Record instances can compare against previous versions of themselves in order to avoid useless updates.

Record subclasses define their custom database relationship by overriding database methods. For example:

class Place: Record {
    var id: Int64?
    var title: String
    var isFavorite: Bool
    var coordinate: CLLocationCoordinate2D
    
    init(id: Int64?, title: String, isFavorite: Bool, coordinate: CLLocationCoordinate2D) {
        self.id = id
        self.title = title
        self.isFavorite = isFavorite
        self.coordinate = coordinate
        super.init()
    }
    
    /// The table name
    override class var databaseTableName: String { "place" }
    
    /// The table columns
    enum Columns: String, ColumnExpression {
        case id, title, favorite, latitude, longitude
    }
    
    /// Creates a record from a database row
    required init(row: Row) throws {
        id = row[Columns.id]
        title = row[Columns.title]
        isFavorite = row[Columns.favorite]
        coordinate = CLLocationCoordinate2D(
            latitude: row[Columns.latitude],
            longitude: row[Columns.longitude])
        try super.init(row: row)
    }
    
    /// The values persisted in the database
    override func encode(to container: inout PersistenceContainer) throws {
        container[Columns.id] = id
        container[Columns.title] = title
        container[Columns.favorite] = isFavorite
        container[Columns.latitude] = coordinate.latitude
        container[Columns.longitude] = coordinate.longitude
    }
    
    /// Update record ID after a successful insertion
    override func didInsert(_ inserted: InsertionSuccess) {
        super.didInsert(inserted)
        id = inserted.rowID
    }
}

Record Comparison

Records that adopt the [EncodableRecord] protocol can compare against other records, or against previous versions of themselves.

This helps avoiding costly UPDATE statements when a record has not been edited.

The updateChanges Methods

The updateChanges methods perform a database update of the changed columns only (and do nothing if the record has no changes).

  • updateChanges(_:from:)

    This method lets you compare two records:

    if let oldPlayer = try Player.fetchOne(db, id: 42) {
        var newPlayer = oldPlayer
        newPlayer.score = 100
        if try newPlayer.updateChanges(db, from: oldPlayer) {
            print("player was modified, and updated in the database")
        } else {
            print("player was not modified, and database was not hit")
        }
    }
    
  • updateChanges(_:modify:)

    This method lets you update a record in place:

    if var player = try Player.fetchOne(db, id: 42) {
        let modified = try player.updateChanges(db) {
            $0.score = 100
        }
        if modified {
            print("player was modified, and updated in the database")
        } else {
            print("player was not modified, and database was not hit")
        }
    }
    
  • updateChanges(_:) (Record class only)

    Instances of the Record class are able to compare against themselves, and know if they have changes that have not been saved since the last fetch or saving:

    // Record class only
    if let player = try Player.fetchOne(db, id: 42) {
        player.score = 100
        if try player.updateChanges(db) {
            print("player was modified, and updated in the database")
        } else {
            print("player was not modified, and database was not hit")
        }
    }
    

The databaseEquals Method

This method returns whether two records have the same database representation:

let oldPlayer: Player = ...
var newPlayer: Player = ...
if newPlayer.databaseEquals(oldPlayer) == false {
    try newPlayer.save(db)
}

Note: The comparison is performed on the database representation of records. As long as your record type adopts the EncodableRecord protocol, you don’t need to care about Equatable.

The databaseChanges and hasDatabaseChanges Methods

databaseChanges(from:) returns a dictionary of differences between two records:

let oldPlayer = Player(id: 1, name: "Arthur", score: 100)
let newPlayer = Player(id: 1, name: "Arthur", score: 1000)
for (column, oldValue) in try newPlayer.databaseChanges(from: oldPlayer) {
    print("\(column) was \(oldValue)")
}
// prints "score was 100"

The Record class is able to compare against itself:

// Record class only
let player = Player(id: 1, name: "Arthur", score: 100)
try player.insert(db)
player.score = 1000
for (column, oldValue) in try player.databaseChanges {
    print("\(column) was \(oldValue)")
}
// prints "score was 100"

Record instances also have a hasDatabaseChanges property:

// Record class only
player.score = 1000
if player.hasDatabaseChanges {
    try player.save(db)
}

Record.hasDatabaseChanges is false after a Record instance has been fetched or saved into the database. Subsequent modifications may set it, or not: hasDatabaseChanges is based on value comparison. Setting a property to the same value does not set the changed flag:

let player = Player(name: "Barbara", score: 750)
player.hasDatabaseChanges  // true

try player.insert(db)
player.hasDatabaseChanges  // false

player.name = "Barbara"
player.hasDatabaseChanges  // false

player.score = 1000
player.hasDatabaseChanges  // true
try player.databaseChanges // ["score": 750]

For an efficient algorithm which synchronizes the content of a database table with a JSON payload, check groue/SortedDifference.

Record Customization Options

GRDB records come with many default behaviors that are designed to fit most situations. Many of those defaults can be customized for your specific needs:

  • [Persistence Callbacks]: define what happens when you call a persistence method such as player.insert(db)
  • [Conflict Resolution]: Run INSERT OR REPLACE queries, and generally define what happens when a persistence method violates a unique index.
  • [Columns Selected by a Request]: define which columns are selected by requests such as Player.fetchAll(db).
  • [Beyond FetchableRecord]: the FetchableRecord protocol is not the end of the story.

[Codable Records] have a few extra options:

  • [JSON Columns]: control the format of JSON columns.
  • [Column Names Coding Strategies]: control how coding keys are turned into column names.
  • [Data, Date, and UUID Coding Strategies]: control the format of Data, Date, and UUID properties in your Codable records.
  • [The userInfo Dictionary]: adapt your Codable implementation for the database.

Conflict Resolution

Insertions and updates can create conflicts: for example, a query may attempt to insert a duplicate row that violates a unique index.

Those conflicts normally end with an error. Yet SQLite lets you alter the default behavior, and handle conflicts with specific policies. For example, the INSERT OR REPLACE statement handles conflicts with the “replace” policy which replaces the conflicting row instead of throwing an error.

The five different policies are: abort (the default), replace, rollback, fail, and ignore.

SQLite lets you specify conflict policies in two different places:

  • In the definition of the database table:

    // CREATE TABLE player (
    //     id INTEGER PRIMARY KEY AUTOINCREMENT,
    //     email TEXT UNIQUE ON CONFLICT REPLACE
    // )
    try db.create(table: "player") { t in
        t.autoIncrementedPrimaryKey("id")
        t.column("email", .text).unique(onConflict: .replace) // <--
    }
    
    
    // Despite the unique index on email, both inserts succeed.
    // The second insert replaces the first row:
    try db.execute(sql: "INSERT INTO player (email) VALUES (?)", arguments: ["arthur@example.com"])
    try db.execute(sql: "INSERT INTO player (email) VALUES (?)", arguments: ["arthur@example.com"])
    
  • In each modification query:

    // CREATE TABLE player (
    //     id INTEGER PRIMARY KEY AUTOINCREMENT,
    //     email TEXT UNIQUE
    // )
    try db.create(table: "player") { t in
        t.autoIncrementedPrimaryKey("id")
        t.column("email", .text).unique()
    }
    
    
    // Again, despite the unique index on email, both inserts succeed.
    try db.execute(sql: "INSERT OR REPLACE INTO player (email) VALUES (?)", arguments: ["arthur@example.com"])
    try db.execute(sql: "INSERT OR REPLACE INTO player (email) VALUES (?)", arguments: ["arthur@example.com"])
    

When you want to handle conflicts at the query level, specify a custom persistenceConflictPolicy in your type that adopts the PersistableRecord protocol. It will alter the INSERT and UPDATE queries run by the insert, update and save [persistence methods]:

protocol MutablePersistableRecord {
    /// The policy that handles SQLite conflicts when records are
    /// inserted or updated.
    ///
    /// This property is optional: its default value uses the ABORT
    /// policy for both insertions and updates, so that GRDB generates
    /// regular INSERT and UPDATE queries.
    static var persistenceConflictPolicy: PersistenceConflictPolicy { get }
}

struct Player : MutablePersistableRecord {
    static let persistenceConflictPolicy = PersistenceConflictPolicy(
        insert: .replace,
        update: .replace)
}

// INSERT OR REPLACE INTO player (...) VALUES (...)
try player.insert(db)

Note: If you specify the ignore policy for inserts, the didInsert callback will be called with some random id in case of failed insert. You can detect failed insertions with insertAndFetch:

> // How to detect failed `INSERT OR IGNORE`:
> // INSERT OR IGNORE INTO player ... RETURNING *
> if let insertedPlayer = try player.insertAndFetch(db) {
>     // Successful insertion
> } else {
>     // Ignored failure
> }

Note: The replace policy may have to delete rows so that inserts and updates can succeed. Those deletions are not reported to [transaction observers](https://swiftpackageindex.com/groue/grdb.swift/documentation/grdb/transactionobserver) (this might change in a future release of SQLite).

Beyond FetchableRecord

Some GRDB users eventually discover that the [FetchableRecord] protocol does not fit all situations. Use cases that are not well handled by FetchableRecord include:

  • Your application needs polymorphic row decoding: it decodes some type or another, depending on the values contained in a database row.

  • Your application needs to decode rows with a context: each decoded value should be initialized with some extra value that does not come from the database.

Since those use cases are not well handled by FetchableRecord, don't try to implement them on top of this protocol: you'll just fight the framework.
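
For example, here is a minimal sketch of polymorphic row decoding built on raw rows rather than FetchableRecord. The shape table and its kind column are hypothetical, invented for this illustration:

enum Shape {
    case circle(radius: Double)
    case rectangle(width: Double, height: Double)
}

let shapes: [Shape] = try dbQueue.read { db in
    // Hypothetical table: shape(id, kind, radius, width, height)
    let rows = try Row.fetchAll(db, sql: "SELECT * FROM shape")
    return try rows.map { row -> Shape in
        // Decode one type or another, depending on the row content
        let kind: String = row["kind"]
        switch kind {
        case "circle":
            return .circle(radius: row["radius"])
        case "rectangle":
            return .rectangle(width: row["width"], height: row["height"])
        default:
            throw DatabaseError(message: "Unknown shape kind: \(kind)")
        }
    }
}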


Examples of Record Definitions

We will show below how to declare a record type for the following database table:

try dbQueue.write { db in
    try db.create(table: "place") { t in
        t.autoIncrementedPrimaryKey("id")
        t.column("title", .text).notNull()
        t.column("isFavorite", .boolean).notNull().defaults(to: false)
        t.column("longitude", .double).notNull()
        t.column("latitude", .double).notNull()
    }
}

Each one of the four examples below is correct. You will pick one or another depending on your personal preferences and the requirements of your application:

Define a Codable struct, and adopt the record protocols you need

This is the shortest way to define a record type. See the [Record Protocols Overview] and [Codable Records] for more information.

struct Place: Codable {
    var id: Int64?
    var title: String
    var isFavorite: Bool
    private var latitude: CLLocationDegrees
    private var longitude: CLLocationDegrees
    
    var coordinate: CLLocationCoordinate2D {
        get {
            CLLocationCoordinate2D(
                latitude: latitude,
                longitude: longitude)
        }
        set {
            latitude = newValue.latitude
            longitude = newValue.longitude
        }
    }
}

// SQL generation
extension Place: TableRecord {
    /// The table columns
    enum Columns {
        static let id = Column(CodingKeys.id)
        static let title = Column(CodingKeys.title)
        static let isFavorite = Column(CodingKeys.isFavorite)
        static let latitude = Column(CodingKeys.latitude)
        static let longitude = Column(CodingKeys.longitude)
    }
}

// Fetching methods
extension Place: FetchableRecord { }

// Persistence methods
extension Place: MutablePersistableRecord {
    // Update auto-incremented id upon successful insertion
    mutating func didInsert(_ inserted: InsertionSuccess) {
        id = inserted.rowID
    }
}

Define a plain struct, and adopt the record protocols you need

See the [Record Protocols Overview] for more information.

struct Place {
    var id: Int64?
    var title: String
    var isFavorite: Bool
    var coordinate: CLLocationCoordinate2D
}

// SQL generation
extension Place: TableRecord {
    /// The table columns
    enum Columns: String, ColumnExpression {
        case id, title, isFavorite, latitude, longitude
    }
}

// Fetching methods
extension Place: FetchableRecord {
    /// Creates a record from a database row
    init(row: Row) {
        id = row[Columns.id]
        title = row[Columns.title]
        isFavorite = row[Columns.isFavorite]
        coordinate = CLLocationCoordinate2D(
            latitude: row[Columns.latitude],
            longitude: row[Columns.longitude])
    }
}

// Persistence methods
extension Place: MutablePersistableRecord {
    /// The values persisted in the database
    func encode(to container: inout PersistenceContainer) {
        container[Columns.id] = id
        container[Columns.title] = title
        container[Columns.isFavorite] = isFavorite
        container[Columns.latitude] = coordinate.latitude
        container[Columns.longitude] = coordinate.longitude
    }
    
    // Update auto-incremented id upon successful insertion
    mutating func didInsert(_ inserted: InsertionSuccess) {
        id = inserted.rowID
    }
}

Define a plain struct optimized for fetching performance

This struct derives its persistence methods from the standard Encodable protocol (see [Codable Records]), but performs optimized row decoding by accessing database columns with numeric indexes. See the [Record Protocols Overview] for more information.

struct Place: Encodable {
    var id: Int64?
    var title: String
    var isFavorite: Bool
    private var latitude: CLLocationDegrees
    private var longitude: CLLocationDegrees
    
    var coordinate: CLLocationCoordinate2D {
        get {
            CLLocationCoordinate2D(
                latitude: latitude,
                longitude: longitude)
        }
        set {
            latitude = newValue.latitude
            longitude = newValue.longitude
        }
    }
}

// SQL generation
extension Place: TableRecord {
    /// The table columns
    enum Columns {
        static let id = Column(CodingKeys.id)
        static let title = Column(CodingKeys.title)
        static let isFavorite = Column(CodingKeys.isFavorite)
        static let latitude = Column(CodingKeys.latitude)
        static let longitude = Column(CodingKeys.longitude)
    }
    
    /// Arrange the selected columns and lock their order
    static let databaseSelection: [any SQLSelectable] = [
        Columns.id,
        Columns.title,
        Columns.isFavorite,
        Columns.latitude,
        Columns.longitude]
}

// Fetching methods
extension Place: FetchableRecord {
    /// Creates a record from a database row
    init(row: Row) {
        // For high performance, use numeric indexes that match the
        // order of Place.databaseSelection
        id = row[0]
        title = row[1]
        isFavorite = row[2]
        coordinate = CLLocationCoordinate2D(
            latitude: row[3],
            longitude: row[4])
    }
}

// Persistence methods
extension Place: MutablePersistableRecord {
    // Update auto-incremented id upon successful insertion
    mutating func didInsert(_ inserted: InsertionSuccess) {
        id = inserted.rowID
    }
}

Subclass the Record class

See the [Record class] for more information.

class Place: Record {
    var id: Int64?
    var title: String
    var isFavorite: Bool
    var coordinate: CLLocationCoordinate2D
    
    init(id: Int64?, title: String, isFavorite: Bool, coordinate: CLLocationCoordinate2D) {
        self.id = id
        self.title = title
        self.isFavorite = isFavorite
        self.coordinate = coordinate
        super.init()
    }
    
    /// The table name
    override class var databaseTableName: String { "place" }
    
    /// The table columns
    enum Columns: String, ColumnExpression {
        case id, title, isFavorite, latitude, longitude
    }
    
    /// Creates a record from a database row
    required init(row: Row) throws {
        id = row[Columns.id]
        title = row[Columns.title]
        isFavorite = row[Columns.isFavorite]
        coordinate = CLLocationCoordinate2D(
            latitude: row[Columns.latitude],
            longitude: row[Columns.longitude])
        try super.init(row: row)
    }
    
    /// The values persisted in the database
    override func encode(to container: inout PersistenceContainer) throws {
        container[Columns.id] = id
        container[Columns.title] = title
        container[Columns.isFavorite] = isFavorite
        container[Columns.latitude] = coordinate.latitude
        container[Columns.longitude] = coordinate.longitude
    }
    
    // Update auto-incremented id upon successful insertion
    override func didInsert(_ inserted: InsertionSuccess) {
        super.didInsert(inserted)
        id = inserted.rowID
    }
}

Requests

📖 QueryInterfaceRequest, Table

The query interface requests let you fetch values from the database:

let request = Player.filter(emailColumn != nil).order(nameColumn)
let players = try request.fetchAll(db)  // [Player]
let count = try request.fetchCount(db)  // Int

Query interface requests usually start from a type that adopts the TableRecord protocol, such as a Record subclass (see Records):

class Player: Record { ... }

// The request for all players:
let request = Player.all()
let players = try request.fetchAll(db) // [Player]

When you can not use a record type, use Table:

// The request for all rows from the player table:
let table = Table("player")
let request = table.all()
let rows = try request.fetchAll(db)    // [Row]

// The request for all players from the player table:
let table = Table<Player>("player")
let request = table.all()
let players = try request.fetchAll(db) // [Player]

Note: all examples in the documentation below use a record type, but you can always substitute a Table instead.

Next, declare the table columns that you want to use for filtering, or sorting:

let idColumn = Column("id")
let nameColumn = Column("name")

You can also declare column enums, if you prefer:

// Columns.id and Columns.name can be used just as
// idColumn and nameColumn declared above.
enum Columns: String, ColumnExpression {
    case id
    case name
}

You can now build requests with the following methods: all, none, select, distinct, filter, matching, group, having, order, reversed, limit, joining, including, with. All those methods return another request, which you can further refine by applying another method: Player.select(...).filter(...).order(...).

  • all(), none(): the requests for all rows, or no row.

    // SELECT * FROM player
    Player.all()
    

    By default, all columns are selected. See [Columns Selected by a Request].

  • select(...) and select(..., as:) define the selected columns. See [Columns Selected by a Request].

    // SELECT name FROM player
    Player.select(nameColumn, as: String.self)
    
  • annotated(with: expression...) extends the selection.

    // SELECT *, (score + bonus) AS total FROM player
    Player.annotated(with: (scoreColumn + bonusColumn).forKey("total"))
    
  • annotated(with: aggregate) extends the selection with association aggregates.

    // SELECT team.*, COUNT(DISTINCT player.id) AS playerCount
    // FROM team
    // LEFT JOIN player ON player.teamId = team.id
    // GROUP BY team.id
    Team.annotated(with: Team.players.count)
    
  • annotated(withRequired: association) and annotated(withOptional: association) extend the selection with [Associations].

    // SELECT player.*, team.color
    // FROM player
    // JOIN team ON team.id = player.teamId
    Player.annotated(withRequired: Player.team.select(colorColumn))
    
  • distinct() performs uniquing.

    // SELECT DISTINCT name FROM player
    Player.select(nameColumn, as: String.self).distinct()
    
  • filter(expression) applies conditions.

    // SELECT * FROM player WHERE id IN (1, 2, 3)
    Player.filter([1,2,3].contains(idColumn))
    
    
    // SELECT * FROM player WHERE (name IS NOT NULL) AND (height > 1.75)
    Player.filter(nameColumn != nil && heightColumn > 1.75)
    
  • filter(id:) and filter(ids:) are type-safe methods available on [Identifiable Records]:

    // SELECT * FROM player WHERE id = 1
    Player.filter(id: 1)
    
    
    // SELECT * FROM country WHERE isoCode IN ('FR', 'US')
    Country.filter(ids: ["FR", "US"])
    
  • filter(key:) and filter(keys:) apply conditions on primary and unique keys:

    // SELECT * FROM player WHERE id = 1
    Player.filter(key: 1)
    
    
    // SELECT * FROM country WHERE isoCode IN ('FR', 'US')
    Country.filter(keys: ["FR", "US"])
    
    
    // SELECT * FROM citizenship WHERE citizenId = 1 AND countryCode = 'FR'
    Citizenship.filter(key: ["citizenId": 1, "countryCode": "FR"])
    
    
    // SELECT * FROM player WHERE email = 'arthur@example.com'
    Player.filter(key: ["email": "arthur@example.com"])
    
  • matching(pattern) (FTS3, FTS5) performs full-text search.

    // SELECT * FROM document WHERE document MATCH 'sqlite database'
    let pattern = FTS3Pattern(matchingAllTokensIn: "SQLite database")
    Document.matching(pattern)
    

    When the pattern is nil, no row will match.

  • group(expression, ...) groups rows.

    // SELECT name, MAX(score) FROM player GROUP BY name
    Player
        .select(nameColumn, max(scoreColumn))
        .group(nameColumn)
    
  • having(expression) applies conditions on grouped rows.

    // SELECT team, MAX(score) FROM player GROUP BY team HAVING MIN(score) >= 1000
    Player
        .select(teamColumn, max(scoreColumn))
        .group(teamColumn)
        .having(min(scoreColumn) >= 1000)
    
  • having(aggregate) applies conditions on grouped rows, according to an association aggregate.

    // SELECT team.*
    // FROM team
    // LEFT JOIN player ON player.teamId = team.id
    // GROUP BY team.id
    // HAVING COUNT(DISTINCT player.id) >= 5
    Team.having(Team.players.count >= 5)
    
  • order(ordering, ...) sorts.

    // SELECT * FROM player ORDER BY name
    Player.order(nameColumn)
    
    
    // SELECT * FROM player ORDER BY score DESC, name
    Player.order(scoreColumn.desc, nameColumn)
    

    SQLite considers NULL values to be smaller than any other values for sorting purposes. Hence, NULLs naturally appear at the beginning of an ascending ordering and at the end of a descending ordering. With a [custom SQLite build], this can be changed using .ascNullsLast and .descNullsFirst:

    // SELECT * FROM player ORDER BY score ASC NULLS LAST
    Player.order(scoreColumn.ascNullsLast)
    

    Each order call clears any previous ordering:

    // SELECT * FROM player ORDER BY name
    Player.order(scoreColumn).order(nameColumn)
    
  • reversed() reverses the eventual orderings.

    // SELECT * FROM player ORDER BY score ASC, name DESC
    Player.order(scoreColumn.desc, nameColumn).reversed()
    

    If no ordering was already specified, this method has no effect:

    // SELECT * FROM player
    Player.all().reversed()
    
  • limit(limit, offset: offset) limits and pages results.

    // SELECT * FROM player LIMIT 5
    Player.limit(5)
    
    
    // SELECT * FROM player LIMIT 5 OFFSET 10
    Player.limit(5, offset: 10)
    
  • joining(required:), joining(optional:), including(required:), including(optional:), and including(all:) fetch and join records through [Associations].

    // SELECT player.*, team.*
    // FROM player
    // JOIN team ON team.id = player.teamId
    Player.including(required: Player.team)
    
  • with(cte) embeds a [common table expression]:

    // WITH ... SELECT * FROM player
    let cte = CommonTableExpression(...)
    Player.with(cte)
    
  • Other requests that involve the primary key:

    • selectPrimaryKey(as:) selects the primary key.

      // SELECT id FROM player
      Player.selectPrimaryKey(as: Int64.self)    // QueryInterfaceRequest<Int64>
      
      
      // SELECT code FROM country
      Country.selectPrimaryKey(as: String.self)  // QueryInterfaceRequest<String>
      
      
      // SELECT citizenId, countryCode FROM citizenship
      Citizenship.selectPrimaryKey(as: Row.self) // QueryInterfaceRequest<Row>
      
    • orderByPrimaryKey() sorts by primary key.

      // SELECT * FROM player ORDER BY id
      Player.orderByPrimaryKey()
      
      
      // SELECT * FROM country ORDER BY code
      Country.orderByPrimaryKey()
      
      
      // SELECT * FROM citizenship ORDER BY citizenId, countryCode
      Citizenship.orderByPrimaryKey()
      
    • groupByPrimaryKey() groups rows by primary key.
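
      For example, assuming the player table's primary key is the id column:

      // SELECT * FROM player GROUP BY id
      Player.groupByPrimaryKey()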

You can refine requests by chaining those methods:

// SELECT * FROM player WHERE (email IS NOT NULL) ORDER BY name
Player.order(nameColumn).filter(emailColumn != nil)

The select, order, group, and limit methods ignore and replace previously applied selection, orderings, grouping, and limits. In contrast, the filter, matching, and having methods extend the query:

Player                          // SELECT * FROM player
    .filter(nameColumn != nil)  // WHERE (name IS NOT NULL)
    .filter(emailColumn != nil) //        AND (email IS NOT NULL)
    .order(nameColumn)          // - ignored -
    .reversed()                 // - ignored -
    .order(scoreColumn)         // ORDER BY score
    .limit(20, offset: 40)      // - ignored -
    .limit(10)                  // LIMIT 10

Raw SQL snippets are also accepted, with eventual arguments:

// SELECT DATE(creationDate), COUNT(*) FROM player WHERE name = 'Arthur' GROUP BY date(creationDate)
Player
    .select(sql: "DATE(creationDate), COUNT(*)")
    .filter(sql: "name = ?", arguments: ["Arthur"])
    .group(sql: "DATE(creationDate)")

Columns Selected by a Request

By default, query interface requests select all columns:

// SELECT * FROM player
struct Player: TableRecord { ... }
let request = Player.all()

// SELECT * FROM player
let table = Table("player")
let request = table.all()

The selection can be changed for each individual request or, in the case of record-based requests, for all requests built from the record type.

The select(...) and select(..., as:) methods change the selection of a single request (see [Fetching from Requests] for detailed information):

let request = Player.select(max(Column("score")))
let maxScore = try Int.fetchOne(db, request) // Int?

let request = Player.select(max(Column("score")), as: Int.self)
let maxScore = try request.fetchOne(db)      // Int?

The default selection for a record type is controlled by the databaseSelection property:

struct RestrictedPlayer : TableRecord {
    static let databaseTableName = "player"
    static let databaseSelection: [any SQLSelectable] = [Column("id"), Column("name")]
}

struct ExtendedPlayer : TableRecord {
    static let databaseTableName = "player"
    static let databaseSelection: [any SQLSelectable] = [AllColumns(), Column.rowID]
}

// SELECT id, name FROM player
let request = RestrictedPlayer.all()

// SELECT *, rowid FROM player
let request = ExtendedPlayer.all()

Note: make sure the databaseSelection property is explicitly declared as [any SQLSelectable]. If it is not, the Swift compiler may silently miss the protocol requirement, resulting in sticky SELECT * requests. To verify your setup, see the How do I print a request as SQL? FAQ.

Expressions

Feed requests with SQL expressions built from your Swift code:
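
For example, here is a brief, non-exhaustive sketch of common comparison and logical operators, assuming name, score, and email columns:

let nameColumn = Column("name")
let scoreColumn = Column("score")
let emailColumn = Column("email")

// SELECT * FROM player WHERE name = 'Arthur'
Player.filter(nameColumn == "Arthur")

// SELECT * FROM player WHERE score >= 1000
Player.filter(scoreColumn >= 1000)

// SELECT * FROM player WHERE (email IS NOT NULL) AND (score > 100)
Player.filter(emailColumn != nil && scoreColumn > 100)

// SELECT * FROM player WHERE email LIKE '%@example.com'
Player.filter(emailColumn.like("%@example.com"))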

Fetching from Requests

Once you have a request, you can fetch the records at the origin of the request:

// Some request based on `Player`
let request = Player.filter(...)... // QueryInterfaceRequest<Player>

// Fetch players:
try request.fetchCursor(db) // A Cursor of Player
try request.fetchAll(db)    // [Player]
try request.fetchSet(db)    // Set<Player>
try request.fetchOne(db)    // Player?

For example:

let allPlayers = try Player.fetchAll(db)                            // [Player]
let arthur = try Player.filter(nameColumn == "Arthur").fetchOne(db) // Player?

See fetching methods for information about the fetchCursor, fetchAll, fetchSet and fetchOne methods.

You sometimes want to fetch other values.

The simplest way is to use the request as an argument to a fetching method of the desired type:

// Fetch an Int
let request = Player.select(max(scoreColumn))
let maxScore = try Int.fetchOne(db, request) // Int?

// Fetch a Row
let request = Player.select(min(scoreColumn), max(scoreColumn))
let row = try Row.fetchOne(db, request)!     // Row
let minScore = row[0] as Int?
let maxScore = row[1] as Int?

You can also change the request so that it knows the type it has to fetch:

  • With asRequest(of:), useful when you use [Associations]:

    struct BookInfo: FetchableRecord, Decodable {
        var book: Book
        var author: Author
    }
    
    
    // A request of BookInfo
    let request = Book
        .including(required: Book.author)
        .asRequest(of: BookInfo.self)
    
    
    let bookInfos = try dbQueue.read { db in
        try request.fetchAll(db) // [BookInfo]
    }
    
  • With select(..., as:), which is handy when you change the selection:

    // A request of Int
    let request = Player.select(max(scoreColumn), as: Int.self)
    
    
    let maxScore = try dbQueue.read { db in
        try request.fetchOne(db) // Int?
    }
    

Fetching by Key

Fetching records according to their primary key is a common task.

[Identifiable Records] can use the type-safe methods find(_:id:), fetchOne(_:id:), fetchAll(_:ids:) and fetchSet(_:ids:):

try Player.find(db, id: 1)                   // Player
try Player.fetchOne(db, id: 1)               // Player?
try Country.fetchAll(db, ids: ["FR", "US"])  // [Country]

All record types can use find(_:key:), fetchOne(_:key:), fetchAll(_:keys:) and fetchSet(_:keys:) that apply conditions on primary and unique keys:

try Player.find(db, key: 1)                  // Player
try Player.fetchOne(db, key: 1)              // Player?
try Country.fetchAll(db, keys: ["FR", "US"]) // [Country]
try Player.fetchOne(db, key: ["email": "arthur@example.com"])            // Player?
try Citizenship.fetchOne(db, key: ["citizenId": 1, "countryCode": "FR"]) // Citizenship?

When the table has no explicit primary key, GRDB uses the hidden rowid column:

// SELECT * FROM document WHERE rowid = 1
try Document.fetchOne(db, key: 1)            // Document?

When you want to build a request and plan to fetch from it later, use a filter method:

let request = Player.filter(id: 1)
let request = Country.filter(ids: ["FR", "US"])
let request = Player.filter(key: ["email": "arthur@example.com"])
let request = Citizenship.filter(key: ["citizenId": 1, "countryCode": "FR"])

Testing for Record Existence

You can check if a request has matching rows in the database.

// Some request based on `Player`
let request = Player.filter(...)...

// Check for player existence:
let noSuchPlayer = try request.isEmpty(db) // Bool

You should check for emptiness instead of counting:

// Correct
let noSuchPlayer = try request.fetchCount(db) == 0
// Even better
let noSuchPlayer = try request.isEmpty(db)

You can also check if a given primary or unique key exists in the database.

[Identifiable Records] can use the type-safe method exists(_:id:):

try Player.exists(db, id: 1)
try Country.exists(db, id: "FR")

All record types can use exists(_:key:) that can check primary and unique keys:

try Player.exists(db, key: 1)
try Country.exists(db, key: "FR")
try Player.exists(db, key: ["email": "arthur@example.com"])
try Citizenship.exists(db, key: ["citizenId": 1, "countryCode": "FR"])

You should check for key existence instead of fetching a record and checking for nil:

// Correct
let playerExists = try Player.fetchOne(db, id: 1) != nil
// Even better
let playerExists = try Player.exists(db, id: 1)

Fetching Aggregated Values

Requests can count. The fetchCount() method returns the number of rows that would be returned by a fetch request:

// SELECT COUNT(*) FROM player
let count = try Player.fetchCount(db) // Int

// SELECT COUNT(*) FROM player WHERE email IS NOT NULL
let count = try Player.filter(emailColumn != nil).fetchCount(db)

// SELECT COUNT(DISTINCT name) FROM player
let count = try Player.select(nameColumn).distinct().fetchCount(db)

// SELECT COUNT(*) FROM (SELECT DISTINCT name, score FROM player)
let count = try Player.select(nameColumn, scoreColumn).distinct().fetchCount(db)

Other aggregated values can also be selected and fetched (see SQL Functions):

let request = Player.select(max(scoreColumn))
let maxScore = try Int.fetchOne(db, request) // Int?

let request = Player.select(min(scoreColumn), max(scoreColumn))
let row = try Row.fetchOne(db, request)!     // Row
let minScore = row[0] as Int?
let maxScore = row[1] as Int?

Delete Requests

Requests can delete records, with the deleteAll() method:

// DELETE FROM player
try Player.deleteAll(db)

// DELETE FROM player WHERE team = 'red'
try Player
    .filter(teamColumn == "red")
    .deleteAll(db)

// DELETE FROM player ORDER BY score LIMIT 10
try Player
    .order(scoreColumn)
    .limit(10)
    .deleteAll(db)

Note: Deletion methods are available on types that adopt the [TableRecord] protocol, and on Table:

> struct Player: TableRecord { ... }
> try Player.deleteAll(db)          // Fine
> try Table("player").deleteAll(db) // Just as fine

Deleting records according to their primary key is a common task.

[Identifiable Records] can use the type-safe methods deleteOne(_:id:) and deleteAll(_:ids:):

try Player.deleteOne(db, id: 1)
try Country.deleteAll(db, ids: ["FR", "US"])

All record types can use deleteOne(_:key:) and deleteAll(_:keys:) that apply conditions on primary and unique keys:

try Player.deleteOne(db, key: 1)
try Country.deleteAll(db, keys: ["FR", "US"])
try Player.deleteOne(db, key: ["email": "arthur@example.com"])
try Citizenship.deleteOne(db, key: ["citizenId": 1, "countryCode": "FR"])

When the table has no explicit primary key, GRDB uses the hidden rowid column:

// DELETE FROM document WHERE rowid = 1
try Document.deleteOne(db, id: 1)

Update Requests

Requests can batch update records. The updateAll() method accepts column assignments defined with the set(to:) method:

// UPDATE player SET score = 0, isHealthy = 1, bonus = NULL
try Player.updateAll(db, 
    Column("score").set(to: 0), 
    Column("isHealthy").set(to: true), 
    Column("bonus").set(to: nil))

// UPDATE player SET score = 0 WHERE team = 'red'
try Player
    .filter(Column("team") == "red")
    .updateAll(db, Column("score").set(to: 0))

// UPDATE player SET top = 1 ORDER BY score DESC LIMIT 10
try Player
    .order(Column("score").desc)
    .limit(10)
    .updateAll(db, Column("top").set(to: true))

// UPDATE country SET population = 67848156 WHERE id = 'FR'
try Country
    .filter(id: "FR")
    .updateAll(db, Column("population").set(to: 67_848_156))

Column assignments accept any expression:

// UPDATE player SET score = score + (bonus * 2)
try Player.updateAll(db, Column("score").set(to: Column("score") + Column("bonus") * 2))

As a convenience, you can also use the +=, -=, *=, or /= operators:

// UPDATE player SET score = score + (bonus * 2)
try Player.updateAll(db, Column("score") += Column("bonus") * 2)

Default [Conflict Resolution] rules apply, and you may also provide a specific one:

// UPDATE OR IGNORE player SET ...
try Player.updateAll(db, onConflict: .ignore, /* assignments... */)

Note: The updateAll method is available on types that adopt the [TableRecord] protocol, and on Table:

> struct Player: TableRecord { ... }
> try Player.updateAll(db, ...)          // Fine
> try Table("player").updateAll(db, ...) // Just as fine


Encryption

GRDB can encrypt your database with SQLCipher. To install it with CocoaPods, specify in your Podfile:

# GRDB with SQLCipher 4
pod 'GRDB.swift/SQLCipher'
pod 'SQLCipher', '~> 4.0'

# GRDB with SQLCipher 3
pod 'GRDB.swift/SQLCipher'
pod 'SQLCipher', '~> 3.4'

Make sure you remove any existing pod 'GRDB.swift' from your Podfile. GRDB.swift/SQLCipher must be the only active GRDB pod in your whole project, or you will face linker or runtime errors, due to the conflicts between SQLCipher and the system SQLite.

Creating or Opening an Encrypted Database

You create and open an encrypted database by providing a passphrase to your [database connection]:

var config = Configuration()
config.prepareDatabase { db in
    try db.usePassphrase("secret")
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

It is also in prepareDatabase that you perform other SQLCipher configuration steps that must happen early in the lifetime of a SQLCipher connection. For example:

var config = Configuration()
config.prepareDatabase { db in
    try db.usePassphrase("secret")
    try db.execute(sql: "PRAGMA cipher_page_size = ...")
    try db.execute(sql: "PRAGMA kdf_iter = ...")
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

When you want to open an existing SQLCipher 3 database with SQLCipher 4, you may want to run the cipher_compatibility pragma:

// Open an SQLCipher 3 database with SQLCipher 4
var config = Configuration()
config.prepareDatabase { db in
    try db.usePassphrase("secret")
    try db.execute(sql: "PRAGMA cipher_compatibility = 3")
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

See SQLCipher 4.0.0 Release and Upgrading to SQLCipher 4 for more information.

Changing the Passphrase of an Encrypted Database

You can change the passphrase of an already encrypted database.

When you use a database queue, open the database with the old passphrase, and then apply the new passphrase:

try dbQueue.write { db in
    try db.changePassphrase("newSecret")
}

When you use a database pool, make sure that no concurrent read can happen by changing the passphrase within the barrierWriteWithoutTransaction block. You must also ensure all future reads open a new database connection by calling the invalidateReadOnlyConnections method:

try dbPool.barrierWriteWithoutTransaction { db in
    try db.changePassphrase("newSecret")
    dbPool.invalidateReadOnlyConnections()
}

Note: When an application wants to keep on using a database queue or pool after the passphrase has changed, it is responsible for providing the correct passphrase to the usePassphrase method called in the database preparation function. Consider:

> // WRONG: this won't work across a passphrase change
> let passphrase = try getPassphrase()
> var config = Configuration()
> config.prepareDatabase { db in
>     try db.usePassphrase(passphrase)
> }
>
> // CORRECT: get the latest passphrase when it is needed
> var config = Configuration()
> config.prepareDatabase { db in
>     let passphrase = try getPassphrase()
>     try db.usePassphrase(passphrase)
> }

Note: The DatabasePool.barrierWriteWithoutTransaction method does not prevent [database snapshots](https://swiftpackageindex.com/groue/grdb.swift/documentation/grdb/databasesnapshot) from accessing the database during the passphrase change, or after the new passphrase has been applied to the database. Those database accesses may throw errors. Applications should provide their own mechanism for invalidating open snapshots before the passphrase is changed.

Note: Instead of changing the passphrase "in place" as described here, you can also export the database into a new encrypted database that uses the new passphrase. See [Exporting a Database to an Encrypted Database] below.


Exporting a Database to an Encrypted Database

Providing a passphrase won't encrypt a clear-text database that already exists, though. SQLCipher can't do that, and you will get an error instead: `SQLite error 26: file is encrypted or is not a database`.

Instead, create a new encrypted database at a distinct location, and export the content of the existing database into it. This technique can encrypt a clear-text database, or change the passphrase of an encrypted database.

The technique to do that is [documented](https://discuss.zetetic.net/t/how-to-encrypt-a-plaintext-sqlite-database-to-use-sqlcipher-and-avoid-file-is-encrypted-or-is-not-a-database-errors/868/1) by SQLCipher.

With GRDB, it gives:

// The existing database
let existingDBQueue = try DatabaseQueue(path: "/path/to/existing.db")

// The new encrypted database, at some distinct location:
var config = Configuration()
config.prepareDatabase { db in
    try db.usePassphrase("secret")
}
let newDBQueue = try DatabaseQueue(path: "/path/to/new.db", configuration: config)

try existingDBQueue.inDatabase { db in
    try db.execute(
        sql: """
            ATTACH DATABASE ? AS encrypted KEY ?;
            SELECT sqlcipher_export('encrypted');
            DETACH DATABASE encrypted;
            """,
        arguments: [newDBQueue.path, "secret"])
}

// Now the export is completed, and the existing database can be deleted.

Security Considerations

Managing the lifetime of the passphrase string

It is recommended to avoid keeping the passphrase in memory longer than necessary. To do this, make sure you load the passphrase from the prepareDatabase method:

// NOT RECOMMENDED: this keeps the passphrase in memory longer than necessary
let passphrase = try getPassphrase()
var config = Configuration()
config.prepareDatabase { db in
    try db.usePassphrase(passphrase)
}

// RECOMMENDED: only load the passphrase when it is needed
var config = Configuration()
config.prepareDatabase { db in
    let passphrase = try getPassphrase()
    try db.usePassphrase(passphrase)
}

This technique helps manage the lifetime of the passphrase, although keep in mind that the content of a String may remain intact in memory long after the object has been released.

For even better control over the lifetime of the passphrase in memory, use a Data object which natively provides the resetBytes function.

// RECOMMENDED: only load the passphrase when it is needed and reset its content immediately after use
var config = Configuration()
config.prepareDatabase { db in
    var passphraseData = try getPassphraseData() // Data
    defer {
        passphraseData.resetBytes(in: 0..<passphraseData.count)
    }
    try db.usePassphrase(passphraseData)
}

Some demanding users will want to go further, and manage the lifetime of the raw passphrase bytes. See below.

Managing the lifetime of the passphrase bytes

GRDB offers convenience methods for providing the database passphrases as Swift strings: usePassphrase(_:) and changePassphrase(_:). Those methods don’t keep the passphrase String in memory longer than necessary. But they are as secure as the standard String type: the lifetime of actual passphrase bytes in memory is not under control.

When you want to precisely manage the passphrase bytes, talk directly to SQLCipher, using its raw C functions.

For example:

var config = Configuration()
config.prepareDatabase { db in
    ... // Carefully load passphrase bytes
    let code = sqlite3_key(db.sqliteConnection, /* passphrase bytes */)
    ... // Carefully dispose passphrase bytes
    guard code == SQLITE_OK else {
        throw DatabaseError(
            resultCode: ResultCode(rawValue: code), 
            message: db.lastErrorMessage)
    }
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

Passphrase availability vs. Database availability

When the passphrase is securely stored in the system keychain, your application can protect it using the kSecAttrAccessible attribute.

Such protection prevents GRDB from creating SQLite connections when the passphrase is not available:

var config = Configuration()
config.prepareDatabase { db in
    let passphrase = try loadPassphraseFromSystemKeychain()
    try db.usePassphrase(passphrase)
}

// Success if and only if the passphrase is available
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

For the same reason, [database pools], which open SQLite connections on demand, may fail at any time as soon as the passphrase becomes unavailable:

// Success if and only if the passphrase is available
let dbPool = try DatabasePool(path: dbPath, configuration: config)

// May fail if passphrase has turned unavailable
try dbPool.read { ... }

// May trigger value observation failure if passphrase has turned unavailable
try dbPool.write { ... }

Because DatabasePool maintains a pool of long-lived SQLite connections, some database accesses will use an existing connection, and succeed. And some other database accesses will fail, as soon as the pool wants to open a new connection. It is impossible to predict which accesses will succeed or fail.

For the same reason, a database queue, which also maintains a long-lived SQLite connection, will remain available even after the passphrase has turned unavailable.

Applications are thus responsible for protecting database accesses when the passphrase is unavailable. To this end, they can use Data Protection. They can also destroy their instances of database queue or pool when the passphrase becomes unavailable.

Backup

You can backup (copy) a database into another.

Backups can, for example, help you copy an in-memory database to and from a database file when you implement NSDocument subclasses.

let source: DatabaseQueue = ...      // or DatabasePool
let destination: DatabaseQueue = ... // or DatabasePool
try source.backup(to: destination)

The backup method blocks the current thread until the destination database contains the same contents as the source database.

When the source is a database pool, concurrent writes can happen during the backup. Those writes may, or may not, be reflected in the backup, but they won’t trigger any error.

Database has an analogous backup method.

let source: DatabaseQueue = ...      // or DatabasePool
let destination: DatabaseQueue = ... // or DatabasePool
try source.write { sourceDb in
    try destination.barrierWriteWithoutTransaction { destDb in
        try sourceDb.backup(to: destDb)
    }
}

This method allows you to choose the source and destination Database handles with which to back up the database.

Backup Progress Reporting

The backup methods take optional pagesPerStep and progress parameters. Together these parameters can be used to track a database backup in progress and abort an incomplete backup.

When pagesPerStep is provided, the database backup is performed in steps. At each step, no more than pagesPerStep database pages are copied from the source to the destination. The backup proceeds one step at a time until all pages have been copied.

When a progress callback is provided, progress is called after every backup step, including the last. Even if a non-default pagesPerStep is specified or the backup is otherwise completed in a single step, the progress callback will be called.

try source.backup(
    to: destination,
    pagesPerStep: ...)
    { backupProgress in
       print("Database backup progress:", backupProgress)
    }

Aborting an Incomplete Backup

If a call to progress throws when backupProgress.isComplete == false, the backup will be aborted and the error rethrown. However, if a call to progress throws when backupProgress.isComplete == true, the backup is unaffected and the error is silently ignored.
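
For instance, here is a minimal sketch of an abortable backup; the shouldAbortBackup flag and the BackupAbortedError type are hypothetical, invented for this illustration:

struct BackupAbortedError: Error { }
var shouldAbortBackup = false // hypothetical flag, e.g. toggled from the UI

try source.backup(to: destination, pagesPerStep: 10) { backupProgress in
    print("Database backup progress:", backupProgress)
    if shouldAbortBackup && !backupProgress.isComplete {
        // Throwing before the backup is complete aborts it,
        // and the error is rethrown by the backup method.
        throw BackupAbortedError()
    }
}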

Warning: Passing non-default values of pagesPerStep or progress to the backup methods is an advanced API intended to provide additional capabilities to expert users. GRDB’s backup API provides a faithful, low-level wrapper to the underlying SQLite online backup API. GRDB’s documentation is not a comprehensive substitute for the official SQLite documentation of their backup API.

Interrupt a Database

The interrupt() method causes any pending database operation to abort and return at its earliest opportunity.

It can be called from any thread.

dbQueue.interrupt()
dbPool.interrupt()

A call to interrupt() that occurs when there are no running SQL statements is a no-op and has no effect on SQL statements that are started after interrupt() returns.

A database operation that is interrupted will throw a DatabaseError with code SQLITE_INTERRUPT. If the interrupted SQL operation is an INSERT, UPDATE, or DELETE that is inside an explicit transaction, then the entire transaction will be rolled back automatically. If the rolled back transaction was started by a transaction-wrapping method such as DatabaseWriter.write or Database.inTransaction, then all database accesses will throw a DatabaseError with code SQLITE_ABORT until the wrapping method returns.

For example:

try dbQueue.write { db in
    try Player(...).insert(db)     // throws SQLITE_INTERRUPT
    try Player(...).insert(db)     // not executed
}                                  // throws SQLITE_INTERRUPT

try dbQueue.write { db in
    do {
        try Player(...).insert(db) // throws SQLITE_INTERRUPT
    } catch { }
}                                  // throws SQLITE_ABORT

try dbQueue.write { db in
    do {
        try Player(...).insert(db) // throws SQLITE_INTERRUPT
    } catch { }
    try Player(...).insert(db)     // throws SQLITE_ABORT
}                                  // throws SQLITE_ABORT

You can catch both SQLITE_INTERRUPT and SQLITE_ABORT errors:

do {
    try dbPool.write { db in ... }
} catch DatabaseError.SQLITE_INTERRUPT, DatabaseError.SQLITE_ABORT {
    // Oops, the database was interrupted.
}

For more information, see Interrupt A Long-Running Query.

Avoiding SQL Injection

SQL injection is a technique that lets an attacker nuke your database.

XKCD: Exploits of a Mom

https://xkcd.com/327/

Here is an example of code that is vulnerable to SQL injection:

// BAD BAD BAD
let id = 1
let name = textField.text
try dbQueue.write { db in
    try db.execute(sql: "UPDATE students SET name = '\(name)' WHERE id = \(id)")
}

If the user enters a funny string like Robert'; DROP TABLE students; --, SQLite will see the following SQL, and drop your database table instead of updating a name as intended:

UPDATE students SET name = 'Robert';
DROP TABLE students;
--' WHERE id = 1

To avoid those problems, never embed raw values in your SQL queries. The only correct technique is to provide arguments to your raw SQL queries:

let name = textField.text
try dbQueue.write { db in
    // Good
    try db.execute(
        sql: "UPDATE students SET name = ? WHERE id = ?",
        arguments: [name, id])
    
    // Just as good
    try db.execute(
        sql: "UPDATE students SET name = :name WHERE id = :id",
        arguments: ["name": name, "id": id])
}

When you use records and the query interface, GRDB always prevents SQL injection for you:

let id = 1
let name = textField.text
try dbQueue.write { db in
    if var student = try Student.fetchOne(db, id: id) {
        student.name = name
        try student.update(db)
    }
}

Error Handling

GRDB can throw DatabaseError, [RecordError], or crash your program with a fatal error.

Considering that a local database is not some JSON loaded from a remote server, GRDB focuses on trusted databases. Dealing with untrusted databases requires extra care.

DatabaseError

📖 DatabaseError

DatabaseError are thrown on SQLite errors:

do {
    try Pet(masterId: 1, name: "Bobby").insert(db)
} catch let error as DatabaseError {
    // The SQLite error code: 19 (SQLITE_CONSTRAINT)
    error.resultCode
    
    // The extended error code: 787 (SQLITE_CONSTRAINT_FOREIGNKEY)
    error.extendedResultCode
    
    // The eventual SQLite message: FOREIGN KEY constraint failed
    error.message
    
    // The eventual erroneous SQL query
    // "INSERT INTO pet (masterId, name) VALUES (?, ?)"
    error.sql
    
    // The eventual SQL arguments
    // [1, "Bobby"]
    error.arguments
    
    // Full error description
    // > SQLite error 19: FOREIGN KEY constraint failed -
    // > while executing `INSERT INTO pet (masterId, name) VALUES (?, ?)`
    error.description
}

If you want to see statement arguments in the error description, make statement arguments public.
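
For example, here is a minimal sketch assuming the Configuration.publicStatementArguments option of recent GRDB versions; only enable it in debug builds, since arguments may contain sensitive data:

var config = Configuration()
#if DEBUG
// Expose statement arguments in error descriptions and logs.
config.publicStatementArguments = true
#endif
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)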

SQLite uses result codes to distinguish between various errors.

You can catch a DatabaseError and match on result codes:

do {
    try ...
} catch let error as DatabaseError {
    switch error {
    case DatabaseError.SQLITE_CONSTRAINT_FOREIGNKEY:
        // foreign key constraint error
    case DatabaseError.SQLITE_CONSTRAINT:
        // any other constraint error
    default:
        // any other database error
    }
}

You can also directly match errors on result codes:

do {
    try ...
} catch DatabaseError.SQLITE_CONSTRAINT_FOREIGNKEY {
    // foreign key constraint error
} catch DatabaseError.SQLITE_CONSTRAINT {
    // any other constraint error
} catch {
    // any other database error
}

Each DatabaseError has two codes: an extendedResultCode (see extended result code), and a less precise resultCode (see primary result code). Extended result codes are refinements of primary result codes, as SQLITE_CONSTRAINT_FOREIGNKEY is to SQLITE_CONSTRAINT, for example.

Warning: SQLite has progressively introduced extended result codes across its versions. The SQLite release notes are unfortunately not quite clear about that: write your handling of extended result codes with care.

RecordError

📖 RecordError

RecordError is thrown by the [PersistableRecord] protocol when the update method could not find any row to update:

do {
    try player.update(db)
} catch let RecordError.recordNotFound(databaseTableName: table, key: key) {
    print("Key \(key) was not found in table \(table).")
}

RecordError is also thrown by the [FetchableRecord] protocol when the find method does not find any record:

do {
    let player = try Player.find(db, id: 42)
} catch let RecordError.recordNotFound(databaseTableName: table, key: key) {
    print("Key \(key) was not found in table \(table).")
}

Fatal Errors

Fatal errors notify that the program, or the database, has to be changed.

They uncover programmer errors, false assumptions, and prevent misuses. Here are a few examples:

  • The code asks for a non-optional value, when the database contains NULL:

    // fatal error: could not convert NULL to String.
    let name: String = row["name"]
    

    Solution: fix the contents of the database, use NOT NULL constraints, or load an optional:

    let name: String? = row["name"]
    
  • Conversion from database value to Swift type fails:

    // fatal error: could not convert "Mom’s birthday" to Date.
    let date: Date = row["date"]
    
    
    // fatal error: could not convert "" to URL.
    let url: URL = row["url"]
    

    Solution: fix the contents of the database, or use DatabaseValue to handle all possible cases:

    let dbValue: DatabaseValue = row["date"]
    if dbValue.isNull {
        // Handle NULL
    } else if let date = Date.fromDatabaseValue(dbValue) {
        // Handle valid date
    } else {
        // Handle invalid date
    }
    
  • The database can’t guarantee that the code does what it says:

    // fatal error: table player has no unique index on column email
    try Player.deleteOne(db, key: ["email": "arthur@example.com"])
    

    Solution: add a unique index to the player.email column, or use the deleteAll method to make it clear that you may delete more than one row:

    try Player.filter(Column("email") == "arthur@example.com").deleteAll(db)
    
  • Database connections are not reentrant:

    // fatal error: Database methods are not reentrant.
    dbQueue.write { db in
        dbQueue.write { db in
            ...
        }
    }
    

    Solution: avoid reentrancy, and instead pass a database connection along.

How to Deal with Untrusted Inputs

Let’s consider the code below:

let sql = "SELECT ..."

// Some untrusted arguments for the query
let arguments: [String: Any] = ...
let rows = try Row.fetchCursor(db, sql: sql, arguments: StatementArguments(arguments))

while let row = try rows.next() {
    // Some untrusted database value:
    let date: Date? = row[0]
}

It has two opportunities to throw fatal errors:

  • Untrusted arguments: The dictionary may contain values that do not conform to the DatabaseValueConvertible protocol, or may miss keys required by the statement.
  • Untrusted database content: The row may contain a non-null value that can’t be turned into a date.

In such a situation, you can still avoid fatal errors by exposing and handling each failure point, one level down in the GRDB API:

// Untrusted arguments
if let arguments = StatementArguments(arguments) {
    let statement = try db.makeStatement(sql: sql)
    try statement.setArguments(arguments)
    
    let cursor = try Row.fetchCursor(statement)
    while let row = try cursor.next() {
        // Untrusted database content
        let dbValue: DatabaseValue = row[0]
        if dbValue.isNull {
            // Handle NULL
        } else if let date = Date.fromDatabaseValue(dbValue) {
            // Handle valid date
        } else {
            // Handle invalid date
        }
    }
}

See [Statement] and DatabaseValue for more information.

Error Log

SQLite can be configured to invoke a callback function containing an error code and a terse error message whenever anomalies occur.

This global error callback must be configured early in the lifetime of your application:

Database.logError = { (resultCode, message) in
    NSLog("%@", "SQLite error \(resultCode): \(message)")
}

Warning: Database.logError must be set before any database connection is opened. This includes the connections that your application opens with GRDB, but also connections opened by other tools, such as third-party libraries. Setting it after a connection has been opened is an SQLite misuse, and has no effect.

See The Error And Warning Log for more information.

Unicode

SQLite lets you store unicode strings in the database.

However, SQLite does not provide any unicode-aware string transformations or comparisons.

Unicode functions

The UPPER and LOWER built-in SQLite functions are not unicode-aware:

// "JéRôME"
try String.fetchOne(db, sql: "SELECT UPPER('Jérôme')")

GRDB extends SQLite with SQL functions that call the Swift built-in string functions capitalized, lowercased, uppercased, localizedCapitalized, localizedLowercased and localizedUppercased:

// "JÉRÔME"
let uppercased = DatabaseFunction.uppercase
try String.fetchOne(db, sql: "SELECT \(uppercased.name)('Jérôme')")

Those unicode-aware string functions are also readily available in the query interface:

Player.select(nameColumn.uppercased)

Memory Management

Both SQLite and GRDB use non-essential memory that helps them perform better.

You can reclaim this memory with the releaseMemory method:

// Release as much memory as possible.
dbQueue.releaseMemory()
dbPool.releaseMemory()

This method blocks the current thread until all current database accesses are completed, and the memory collected.

Warning: If DatabasePool.releaseMemory() is called while a long read is performed concurrently, then no other read access will be possible until this long read has completed, and the memory has been released. If this does not suit your application needs, look for the asynchronous options below:

You can release memory in an asynchronous way as well:

// On a DatabaseQueue
dbQueue.asyncWriteWithoutTransaction { db in
    db.releaseMemory()
}

// On a DatabasePool
dbPool.releaseMemoryEventually()

DatabasePool.releaseMemoryEventually() does not block the current thread, and does not prevent concurrent database accesses. In exchange for this convenience, you don’t know when memory has been freed.

FAQ: Opening Connections

How do I open a database stored as a resource of my application?

Open a read-only connection to your resource:

// HOW TO open a read-only connection to a database resource

// Get the path to the database resource.
let dbPath = Bundle.main.path(forResource: "db", ofType: "sqlite")

if let dbPath {
    // If the resource exists, open a read-only connection.
    // Writes are disallowed because resources can not be modified. 
    var config = Configuration()
    config.readonly = true
    let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)
} else {
    // The database resource can not be found.
    // Fix your setup, or report the problem to the user. 
}

How do I close a database connection?

Database connections are automatically closed when DatabaseQueue or DatabasePool instances are deinitialized.

If the correct execution of your program depends on precise database closing, perform an explicit call to close(). This method may fail and create zombie connections, so please check its detailed documentation.
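
For example, a minimal sketch:

do {
    // Close the connection before the DatabaseQueue is deinitialized.
    try dbQueue.close()
} catch {
    // Closing failed. See the close() documentation about the
    // consequences (zombie connections).
    print("Could not close the database: \(error)")
}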

FAQ: SQL

How do I print a request as SQL?

When you want to debug a request that does not deliver the expected results, you may want to print the SQL that is actually executed.

You can compile the request into a prepared [Statement]:

try dbQueue.read { db in
    let request = Player.filter(Column("email") == "arthur@example.com")
    let statement = try request.makePreparedRequest(db).statement
    print(statement) // SELECT * FROM player WHERE email = ?
    print(statement.arguments) // ["arthur@example.com"]
}

Another option is to set up a tracing function that prints out the executed SQL requests. For example, provide a tracing function when you connect to the database:

// Prints all SQL statements
var config = Configuration()
config.prepareDatabase { db in
    db.trace { print($0) }
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

try dbQueue.read { db in
    // Prints "SELECT * FROM player WHERE email = ?"
    let players = try Player.filter(Column("email") == "arthur@example.com").fetchAll(db)
}

If you want to see statement arguments such as 'arthur@example.com' in the logged statements, make statement arguments public.

Note: the generated SQL may change between GRDB releases, without notice: don’t have your application rely on any specific SQL output.

How do I monitor the duration of database statements execution?

Use the trace(options:_:) method, with the .profile option:

var config = Configuration()
config.prepareDatabase { db in
    db.trace(options: .profile) { event in
        // Prints all SQL statements with their duration
        print(event)
        
        // Access to detailed profiling information
        if case let .profile(statement, duration) = event, duration > 0.5 {
            print("Slow query: \(statement.sql)")
        }
    }
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

try dbQueue.read { db in
    let players = try Player.filter(Column("email") == "arthur@example.com").fetchAll(db)
    // Prints "0.003s SELECT * FROM player WHERE email = ?"
}

If you want to see statement arguments such as 'arthur@example.com' in the logged statements, make statement arguments public.

What Are Experimental Features?

Since GRDB 1.0, all backwards compatibility guarantees of semantic versioning apply: no breaking change will happen until the next major version of the library.

There is an exception, though: experimental features, marked with the “:fire: EXPERIMENTAL” badge. Those are advanced features that are too young, or lack user feedback. They are not stabilized yet.

Those experimental features are not protected by semantic versioning, and may break between two minor releases of the library. To help them become stable, your feedback is greatly appreciated.

FAQ: Associations

How do I filter records and only keep those that are associated to another record?

Let’s say you have two record types, Book and Author, and you want to only fetch books that have an author, and discard anonymous books.

We start by defining the association between books and authors:

struct Book: TableRecord {
    ...
    static let author = belongsTo(Author.self)
}

struct Author: TableRecord {
    ...
}

And then we can write our request and only fetch books that have an author, discarding anonymous ones:

let books: [Book] = try dbQueue.read { db in
    // SELECT book.* FROM book 
    // JOIN author ON author.id = book.authorID
    let request = Book.joining(required: Book.author)
    return try request.fetchAll(db)
}

Note how this request does not use the filter method. Indeed, we don’t have any condition to express on any column. Instead, we just need to “require that a book can be joined to its author”.

See How do I filter records and only keep those that are NOT associated to another record? below for the opposite question.

How do I filter records and only keep those that are NOT associated to another record?

Let’s say you have two record types, Book and Author, and you want to only fetch anonymous books that do not have any author.

We start by defining the association between books and authors:

struct Book: TableRecord {
    ...
    static let author = belongsTo(Author.self)
}

struct Author: TableRecord {
    ...
}

And then we can write our request and only fetch anonymous books that don’t have any author:

let books: [Book] = try dbQueue.read { db in
    // SELECT book.* FROM book
    // LEFT JOIN author ON author.id = book.authorID
    // WHERE author.id IS NULL
    let authorAlias = TableAlias()
    let request = Book
        .joining(optional: Book.author.aliased(authorAlias))
        .filter(!authorAlias.exists)
    return try request.fetchAll(db)
}

This request uses a TableAlias in order to filter on the eventual associated author. The !authorAlias.exists condition requires that no associated author exists: the book has no author.

See How do I filter records and only keep those that are associated to another record? above for the opposite question.

How do I select only one column of an associated record?

Let’s say you have two record types, Book and Author, and you want to fetch all books with their author name, but not the full associated author records.

We start by defining the association between books and authors:

struct Book: Decodable, TableRecord {
    ...
    static let author = belongsTo(Author.self)
}

struct Author: Decodable, TableRecord {
    ...
    enum Columns {
        static let name = Column(CodingKeys.name)
    }
}

And then we can write our request and the ad-hoc record that decodes it:

struct BookInfo: Decodable, FetchableRecord {
    var book: Book
    var authorName: String? // nil when the book is anonymous
    
    static func all() -> QueryInterfaceRequest<BookInfo> {
        // SELECT book.*, author.name AS authorName
        // FROM book
        // LEFT JOIN author ON author.id = book.authorID
        let authorName = Author.Columns.name.forKey(CodingKeys.authorName)
        return Book
            .annotated(withOptional: Book.author.select(authorName))
            .asRequest(of: BookInfo.self)
    }
}

let bookInfos: [BookInfo] = try dbQueue.read { db in
    try BookInfo.all().fetchAll(db)
}

By defining the request as a static method of BookInfo, you have access to the private CodingKeys.authorName, and a compiler-checked SQL column name.

By using the annotated(withOptional:) method, you append the author name to the top-level selection that can be decoded by the ad-hoc record.

By using asRequest(of:), you enhance the type-safety of your request.

FAQ: ValueObservation

Why is ValueObservation not publishing value changes?

Sometimes it looks like a [ValueObservation] does not notify the changes you expect.

There are four possible reasons for this:

  1. The expected changes were not committed into the database.
  2. The expected changes were committed into the database, but were quickly overwritten.
  3. The observation was stopped.
  4. The observation does not track the expected database region.

To investigate the first two possibilities, look at the SQL statements executed by the database. Tracing is configured when you open the database connection:

// Prints all SQL statements
var config = Configuration()
config.prepareDatabase { db in
    db.trace { print("SQL: \($0)") }
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

If, after that, you are convinced that the expected changes were committed into the database, and not overwritten soon after, trace observation events:

let observation = ValueObservation
    .tracking { db in ... }
    .print() // <- trace observation events
let cancellable = observation.start(...)

Look at the observation logs which start with cancel or failure: maybe the observation was cancelled by your app, or failed with an error.

Look at the observation logs which start with value: make sure, again, that the expected value was not actually notified, then overwritten.

Finally, look at the observation logs which start with tracked region. Does the printed database region cover the expected changes?

For example:

  • empty: The empty region, which tracks nothing and never triggers the observation.
  • player(*): The full player table
  • player(id,name): The id and name columns of the player table
  • player(id,name)[1]: The id and name columns of the row with id 1 in the player table
  • player(*),team(*): Both the full player and team tables

If you happen to use the ValueObservation.trackingConstantRegion(_:) method and see a mismatch between the tracked region and your expectation, change the definition of your observation to use tracking(_:) instead. The logs which start with tracked region should then evolve to include the expected changes, and you should receive the expected notifications.
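For illustration, a sketch of an observation whose tracked region is not constant (the team table, its score column, and the player teamId column are assumptions): the set of observed players depends on a value read during the same fetch, so tracking(_:) is the appropriate choice.

let observation = ValueObservation.tracking { db -> [Player] in
    // The observed players depend on the current best team,
    // so the tracked region can change from one fetch to the next.
    guard let teamId = try Int64.fetchOne(
        db, sql: "SELECT id FROM team ORDER BY score DESC LIMIT 1")
    else { return [] }
    return try Player.filter(Column("teamId") == teamId).fetchAll(db)
}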

If after all those steps (thank you!), your observation still fails you, please open an issue and provide a minimal reproducible example!

FAQ: Errors

Generic parameter ’T’ could not be inferred

You may get this error when using the read and write methods of database queues and pools:

// Generic parameter 'T' could not be inferred
let string = try dbQueue.read { db in
    let result = try String.fetchOne(db, ...)
    return result
}

This is a limitation of the Swift compiler.

The general workaround is to explicitly declare the type of the closure result:

// General Workaround
let string = try dbQueue.read { db -> String? in
    let result = try String.fetchOne(db, ...)
    return result
}

You can also, when possible, write a single-line closure:

// Single-line closure workaround:
let string = try dbQueue.read { db in
    try String.fetchOne(db, ...)
}

Mutation of captured var in concurrently-executing code

The insert and save persistence methods can trigger a compiler error in async contexts:

var player = Player(id: nil, name: "Arthur")
try await dbWriter.write { db in
    // Error: Mutation of captured var 'player' in concurrently-executing code
    try player.insert(db)
}
print(player.id) // A non-nil id

When this happens, prefer the inserted and saved methods instead:

// OK
var player = Player(id: nil, name: "Arthur")
player = try await dbWriter.write { [player] db in
    return try player.inserted(db)
}
print(player.id) // A non-nil id

SQLite error 1 “no such column”

This error message is self-explanatory: do check for misspelled or non-existing column names.

However, sometimes this error only happens when the app runs on a recent operating system (iOS 14+, Big Sur+, etc.), and does not happen on older ones.

When this is the case, there are two possible explanations:

  1. Maybe a column name is really misspelled or missing from the database schema.

    To find it, check the SQL statement that comes with the DatabaseError.

  2. Maybe the application is using the double quote " instead of the single quote ' as the delimiter for string literals in raw SQL queries. Recent versions of SQLite complain about this deviation from the SQL standard, and this is why you are seeing this error.

    For example: this is not standard SQL: UPDATE player SET name = "Arthur".

    The standard version is: UPDATE player SET name = 'Arthur'.

    It just happens that old versions of SQLite used to accept the former, non-standard version. Newer versions are able to reject it with an error.

    The fix is to change the SQL statements run by the application: replace " with ' in your string literals.

    It may also be time to learn about statement arguments and SQL injection:

    let name: String = ...
    
    
    // NOT STANDARD (double quote)
    try db.execute(sql: """
        UPDATE player SET name = "\(name)"
        """)
    
    
    // STANDARD, BUT STILL NOT RECOMMENDED (single quote)
    try db.execute(sql: "UPDATE player SET name = '\(name)'")
    
    
    // STANDARD, AND RECOMMENDED (statement arguments)
    try db.execute(sql: "UPDATE player SET name = ?", arguments: [name])
    

For more information, see Double-quoted String Literals Are Accepted, and Configuration.acceptsDoubleQuotedStringLiterals.
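As a stopgap while you migrate legacy SQL, here is a sketch of the compatibility flag, assuming your GRDB version exposes Configuration.acceptsDoubleQuotedStringLiterals:

var config = Configuration()
// Sketch: temporarily restore the legacy SQLite behavior that accepts
// double-quoted string literals (not recommended as a long-term fix).
config.acceptsDoubleQuotedStringLiterals = true
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)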

SQLite error 10 “disk I/O error”, SQLite error 23 “not authorized”

Those errors may be the sign that SQLite can’t access the database due to data protection.

When your application should be able to run in the background on a locked device, it has to catch this error and, for example, wait for the UIApplicationDelegate.applicationProtectedDataDidBecomeAvailable(_:) method or the UIApplicationProtectedDataDidBecomeAvailable notification before retrying the failed database operation.

do {
    try ...
} catch DatabaseError.SQLITE_IOERR, DatabaseError.SQLITE_AUTH {
    // Handle possible data protection error
}

This error can also be prevented altogether by using a more relaxed file protection.
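For example, here is a sketch that stores the database in a dedicated folder created with a relaxed protection level (the folder name and the chosen FileProtectionType are assumptions to adapt to your own security requirements):

let fileManager = FileManager.default
let folderURL = try fileManager
    .url(for: .applicationSupportDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
    .appendingPathComponent("database", isDirectory: true)

// Create the folder with a protection level that keeps the files
// readable after the first device unlock.
try fileManager.createDirectory(
    at: folderURL,
    withIntermediateDirectories: true,
    attributes: [.protectionKey: FileProtectionType.completeUntilFirstUserAuthentication])

let dbQueue = try DatabaseQueue(path: folderURL.appendingPathComponent("db.sqlite").path)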

SQLite error 21 “wrong number of statement arguments” with LIKE queries

You may get the error “wrong number of statement arguments” when executing a LIKE query similar to:

let name = textField.text
let players = try dbQueue.read { db in
    try Player.fetchAll(db, sql: "SELECT * FROM player WHERE name LIKE '%?%'", arguments: [name])
}

The problem lies in the '%?%' pattern.

SQLite only interprets ? as a parameter when it is a placeholder for a whole value (int, double, string, blob, null). In this incorrect query, ? is just a character in the '%?%' string: it is not a query parameter, and is not processed in any way. See https://www.sqlite.org/lang_expr.html#varparam for more information about SQLite parameters.

To fix the error, you can feed the request with the pattern itself, instead of the name:

let name = textField.text
let players: [Player] = try dbQueue.read { db in
    let pattern = "%\(name)%"
    return try Player.fetchAll(db, sql: "SELECT * FROM player WHERE name LIKE ?", arguments: [pattern])
}
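Alternatively, a sketch with the query interface, which builds the same pattern and binds it as a statement argument (assuming name has been unwrapped to a non-optional String):

let players: [Player] = try dbQueue.read { db in
    try Player
        .filter(Column("name").like("%\(name)%"))
        .fetchAll(db)
}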

Sample Code

  • The Documentation is full of GRDB snippets.
  • [Demo Applications]
  • Open GRDB.xcworkspace: it contains GRDB-enabled playgrounds to play with.
  • groue/SortedDifference: How to synchronize a database table with a JSON payload

Thanks


URIs don’t change: people change them.

Advanced DatabasePool

This chapter has moved.

After Commit Hook

This chapter has moved.

Asynchronous APIs

This chapter has moved.

Changes Tracking

This chapter has been renamed [Record Comparison].

Concurrency

This chapter has moved.

Custom Value Types

Custom Value Types conform to the [DatabaseValueConvertible] protocol.

Customized Decoding of Database Rows

This chapter has been renamed [Beyond FetchableRecord].

Customizing the Persistence Methods

This chapter was replaced with [Persistence Callbacks].

Database Changes Observation

This chapter has moved.

Database Configuration

This chapter has moved.

Database Queues

This chapter has moved.

Database Pools

This chapter has moved.

Database Snapshots

This chapter has moved.

DatabaseWriter and DatabaseReader Protocols

This chapter was removed. See the references of DatabaseReader and DatabaseWriter.

Date and UUID Coding Strategies

This chapter has been renamed [Data, Date, and UUID Coding Strategies].

Dealing with External Connections

This chapter has been superseded by the [Sharing a Database] guide.

Differences between Database Queues and Pools

This chapter has moved.

FetchedRecordsController

FetchedRecordsController has been removed in GRDB 5.

The [Database Observation] chapter describes the other ways to observe the database.


Guarantees and Rules

This chapter has moved.

Migrations

This chapter has moved.

NSNumber and NSDecimalNumber

This chapter has moved.

Persistable Protocol

This protocol has been renamed [PersistableRecord] in GRDB 3.0.

PersistenceError

This error was renamed to [RecordError].

Prepared Statements

This chapter has moved.

Row Adapters

This chapter has moved.

RowConvertible Protocol

This protocol has been renamed [FetchableRecord] in GRDB 3.0.

TableMapping Protocol

This protocol has been renamed [TableRecord] in GRDB 3.0.

Transactions and Savepoints

This chapter has moved.

Transaction Hook

This chapter has moved.

TransactionObserver Protocol

This chapter has moved.

Unsafe Concurrency APIs

This chapter has moved.

ValueObservation

This chapter has moved.


Articles

  • coming soon...