This release cycle is much shorter than the previous ones, reflecting our new approach at EdgeDB: providing improvements at a steady, regular pace rather than in big but infrequent batches. Going forward we expect to maintain this shorter release cadence, focusing on a few features at a time.
To play with the new features, install the CLI using our installation guide and initialize a new project.
$ edgedb project init
Local and Cloud instances
To upgrade a local project, first ensure that your CLI is up to date with edgedb cli upgrade. Then run the following command inside the project directory.
$ edgedb project upgrade
Alternatively, specify an instance name if you aren’t using a project.
$ edgedb instance upgrade -I my_instance
The CLI will first check to see if your schema will migrate cleanly to EdgeDB 4.0. If the upgrade check finds any problems, it will report them back to you.
Hosted instances
To upgrade a remote (hosted) instance, we recommend the following dump-and-restore process.
EdgeDB v4.0 only supports PostgreSQL 14 (or above). So check the version of PostgreSQL you are using before upgrading EdgeDB. If you’re using Postgres 13 or below, you should upgrade Postgres first.
Spin up an empty 4.0 instance. You can use one of our deployment guides.
Under Debian/Ubuntu, when adding the EdgeDB package repository, use this command instead:
$ echo deb [signed-by=/usr/local/share/keyrings/edgedb-keyring.gpg] \
    https://packages.edgedb.com/apt \
    $(grep "VERSION_CODENAME=" /etc/os-release | cut -d= -f2) main \
    | sudo tee /etc/apt/sources.list.d/edgedb.list
Use this command for installation under Debian/Ubuntu:
$ sudo apt-get update && sudo apt-get install edgedb-4
Under CentOS/RHEL, use this installation command:
$ sudo yum install edgedb-4
In any required systemctl commands, replace edgedb-server-3 with edgedb-server-4.
Under any Docker setups, supply the 4.0 tag.
Take your application offline, then dump your v3.x database with the CLI:
$ edgedb dump --dsn <old dsn> --all --format dir my_database.dump/
This will dump the schema and contents of your current database to a directory on your local disk called my_database.dump. The directory name isn’t important.
Restore the empty v4.x instance from the dump:
$ edgedb restore --all my_database.dump/ --dsn <new dsn>
Once the restore is complete, update your application to connect to the new instance.
This process will involve some downtime, specifically during the dump and restore steps.
EdgeDB 4.0 adds full-text search functionality, packaged in the fts module. By adding an fts::index to an object type, you can transform any object into a searchable document:
type Item {
  required available: bool {
    default := false;
  };
  required name: str;
  required description: str;

  index fts::index on (
    fts::with_options(
      .name,
      language := fts::Language.eng
    )
  );
}
The fts::index indicates to EdgeDB that this object type is a valid target for full-text search. The property to be searched, as well as its language, is specified in the index.
The fts::search() function allows searching objects for a particular phrase:
db> select fts::search(Item, 'candy corn', language := 'eng');
{
  (
    object := default::Item {id: 9da06b18-69b2-11ee-96b9-1bedbe75ad4f},
    score := 0.30396354,
  ),
  (
    object := default::Item {id: 92375624-69b2-11ee-96b9-675b9b87ac70},
    score := 0.6079271,
  ),
}
The search results are provided as tuples containing the matching document object and a score. A higher score indicates a better match, so we can use these values to order the results:
db> with res := (
...   select fts::search(Item, 'candy corn', language := 'eng')
... )
... select res.object {name, score := res.score}
... order by res.score desc;
{
  default::Item {name: 'Candy corn', score: 0.6079271},
  default::Item {name: 'Canned corn', score: 0.30396354},
}
You can have at most one fts::index defined for any particular type, so if multiple properties should be searchable, they must all be specified in that one index:
type Item {
  required available: bool {
    default := false;
  };
  required name: str;
  required description: str;

  index fts::index on ((
    fts::with_options(
      .name,
      language := fts::Language.eng
    ),
    fts::with_options(
      .description,
      language := fts::Language.eng
    )
  ));
}
The above schema declares both name and description as searchable fields:
db> with res := (
...   select fts::search(Item, 'trick or treat', language := 'eng')
... )
... select res.object {name, description, score := res.score}
... order by res.score desc;
{
  default::Item {
    name: 'Candy corn',
    description: 'A great Halloween treat',
    score: 0.30396354,
  },
}
We’ve made it easier to work with ranges by adding a multirange datatype. Multiranges consist of one or more ranges and allow expressing intervals that are not contiguous. Multiranges are automatically normalized to contain non-overlapping ranges that are ordered according to their boundaries. All the usual range operators and functions like overlaps or contains work with any combination of ranges and multiranges, providing more flexibility in expressions.
db> select multirange([range(8, 10)]) + range(1, 5) - range(3, 4);
{[range(1, 3), range(4, 5), range(8, 10)]}
Starting in rc1, the EdgeQL over HTTP and GraphQL endpoints support (and by default require) authentication.
By default, HTTP Basic Authentication is used.
Full details are available in the EdgeQL over HTTP documentation.
This is a backwards-incompatible change. It is possible to opt in to the old behavior, but this is not recommended.
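As a sketch of what an authenticated request could look like (the instance URL, database name, and credentials below are placeholders for illustration, not values from this release):

```typescript
// Sketch: EdgeQL-over-HTTP request authenticated with HTTP Basic auth.

// HTTP Basic auth: base64-encode "user:password" (per RFC 7617).
function basicAuthHeader(user: string, password: string): string {
  const encoded = Buffer.from(`${user}:${password}`).toString("base64");
  return `Basic ${encoded}`;
}

// Request shape you would POST to the instance's EdgeQL endpoint:
const request = {
  url: "http://localhost:10700/db/edgedb/edgeql", // placeholder instance URL
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: basicAuthHeader("edgedb", "secret"), // placeholder credentials
  },
  body: JSON.stringify({ query: "select 1 + 1;" }),
};

console.log(request.headers.Authorization); // Basic ZWRnZWRiOnNlY3JldA==
```

See the EdgeQL over HTTP documentation for the exact endpoint URL format and credential setup for your instance.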
The new auth extension adds a full authentication service that runs alongside your database instance, saving you the hassle of having to learn and implement the intricacies of OAuth or secure password storage.
OAuth Integration: Seamlessly authenticate with GitHub, Google, Apple, and Azure/Microsoft.
Email & Password Support: Includes robust email+password authentication with reset password functionality.
Easy Configuration: Set up via our configuration system.
Hosted UI: Use our hosted authentication UI to quickly add authentication to your app.
When a user signs up, we create a new object of type ext::auth::Identity, which you can link to in your own schema. We then provide you with a token that can be set as the global ext::auth::client_token, which will automatically populate another computed global called ext::auth::ClientTokenIdentity that you can use directly in your access policies or in your own globals.
using extension auth;

module default {
  global current_customer := (
    assert_single((
      select Customer
      filter .identity = global ext::auth::ClientTokenIdentity
    ))
  );

  type Customer {
    required text: str;
    required identity: ext::auth::Identity;
  }

  type Item {
    required sku: str;
    required description: str;
  }

  type Cart {
    required customer: Customer;
    multi items: Item {
      quantity: int32;
    };

    access policy customer_has_full_access
      allow all
      using (global current_customer ?= .customer);
  }
}
Here’s an example query using the TypeScript client:
import { createClient } from "edgedb";
declare const tokenFromAuthServer: string;
const client = createClient()
.withGlobals({
"ext::auth::client_token": tokenFromAuthServer
});
const carts = await client.query(`select Cart { * };`);
We’ve added pgcrypto to our extensions. This exposes digest, hmac, gen_salt, and crypt functions for your hashing, encrypting, and salting needs.
db> select ext::pgcrypto::digest('encrypt this', 'sha1');
{b'\x05\x82\xd8YLF\xe7\xd4\x12\x91\n\xdb$\xf1!v\xf9\xd4\x89\xc4'}
db> select ext::pgcrypto::gen_salt('md5');
{'$1$FjNlXgX7'}
Standard algorithms are “md5”, “sha1”, “sha224”, “sha256”, “sha384” and “sha512”. Moreover, any digest algorithm OpenSSL supports is automatically picked up.
The pg_trgm extension provides functionality used to determine string similarity, which makes it a good text search alternative for some use cases:
db> with x := {'hello world', 'word hero', 'help the world'}
... select res := (x, ext::pg_trgm::word_similarity(x, 'hello world'))
... order by res.1 desc;
{('hello world', 1), ('help the world', 0.5), ('word hero', 0.35714287)}
We’ve made a few internal changes affecting performance, the biggest of which was rewriting the EdgeQL parser in Rust. Overall we’ve managed to reduce baseline server memory consumption by 40%.
Add a new style of if/then/else syntax. (#6074)
Many people find it more natural to write “if … then … else …” for conditional expressions because it mirrors the conditional statements found in other familiar programming languages.
db> select if count(Object) > 0 then 'got data' else 'no data';
{'got data'}
Support conditional DML. (#6181)
It can be useful to be able to create, update or delete different objects based on some condition:
with
  name := <str>$0,
  admin := <bool>$1
select if admin then (
  insert AdminUser { name := name }
) else (
  insert User { name := name }
)
A different use case of conditional DML is using the coalesce operator to express things like “select or insert if missing”:
select (select User filter .name = 'Alice') ??
  (insert User { name := 'Alice' });
Add contains for JSON so that it can be used with a pg::gin index. (#5910)
Add to_bytes() to convert str into bytes using UTF-8 encoding. (#5960)
Add to_str() to convert bytes into str using UTF-8 encoding. (#5960)
Add enc::base64_encode and enc::base64_decode functions. (#5963)
db> select enc::base64_encode(b'hello');
{'aGVsbG8='}
db> select enc::base64_decode('aGVsbG8=');
{b'hello'}
Add a when clause to triggers to enable them to be conditional. (#6184)
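As a sketch of what a conditional trigger could look like (the type names, trigger name, and trigger body here are hypothetical, not taken from the changelog):

```sdl
type Person {
  required name: str;

  # Hypothetical example: only fire when the name actually changed.
  trigger log_rename after update for each
  when (__old__.name != __new__.name)
  do (
    insert Note { text := 'renamed to ' ++ __new__.name }
  );
}

type Note {
  required text: str;
}
```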
Allow empty arrays without a cast in insert. (#6218)
Change how globals are passed in GraphQL queries. (#5864)
Instead of using a separate globals field (which is non-standard), use variables to add a __globals__ object to pass the global variables.
To ensure backwards compatibility, the old way of passing globals is still valid. If both the new and the old methods are used, the globals passed in them must match, or the query will be rejected.
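A sketch of the new request shape (the query, global name, and value below are hypothetical placeholders):

```typescript
// GraphQL request payload with EdgeDB globals passed via standard variables.
const payload = {
  query: `
    query getUser($name: String!) {
      User(filter: { name: { eq: $name } }) {
        name
      }
    }
  `,
  variables: {
    name: "Alice",
    // Globals now travel inside variables, under the __globals__ key.
    __globals__: {
      "default::current_user_id": "00000000-0000-0000-0000-000000000000",
    },
  },
};

const body = JSON.stringify(payload); // ready to POST to the GraphQL endpoint
```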
Fix GraphQL bug with objects without editable fields. (#6056)
Fix GraphQL issues with deeply nested modules. (#6056)
Fix GraphQL __typename for non-default modules and mutations. (#6035)
Fix GraphQL fragments on types from non-default module. (#6035)
Fix a casting bug for some aliased expressions. (#5788)
Fix cardinality inference of calls to functions with optional args. (#5867)
Fix the undefined order of columns in SQL COPY. (#6036)
Fix drop of union links when source has a subtype. (#6044)
Fix link deletion policies on links to union types. (#6033)
Fix deletion issues of aliases that use with. (#6052)
Make id of schema objects stable. (#6058)
Allow computed pointers on types to omit link/property kind specification. (#6073)
Support listen_ports greater than 32767. (#6194)
Fix migration issues with some overloaded indexes/constraints in SDL. (#6172)
Support DML on right hand side of coalesce expressions. (#6202)
Fix cardinality inference of polymorphic shape elements. (#6255)
Fix migration issue involving property defaults. (#6265)
Fix bugs in set ... using statements with assert_exists and similar. (#6267)
Fix cardinality bug when a property appears in multiple splats. (#6255)
Make comparison operators non-associative. (#6327)
Fix an obscure parser bug caused by constant extraction. (#6328)
Cap the size of sets in multi configuration values to 128. (#6402)