Most of this tutorial is about the CQL language itself, but here we must digress a bit. The purpose of the result set features of CQL is to create a C interface to SQLite data. Because of this there are a lot of essential details that require looking carefully at the generated C code. Appendix 2 covers this code in even more detail, but here it makes sense to at least talk about the interface.
If we have this simple stored procedure:
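Something along these lines will do (a representative sketch; any procedure whose body ends in a single `SELECT` works the same way):

```sql
create table foo(id integer not null, b bool, t text);

create proc read_foo(id_ integer not null)
begin
  select * from foo where id = id_;
end;
```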
We've created a simple data reader: this CQL code will cause the compiler to generate helper functions to read the data and materialize a result set.
Let's look at the public interface of that result set now considering the most essential pieces.
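The generated header looks approximately like this (a sketch; exact attributes and annotations vary by CQL version and compilation options):

```c
/* the result set type and its object reference type */
cql_result_set_type_decl(read_foo_result_set, read_foo_result_set_ref);

/* one field reader per column */
extern cql_int32 read_foo_get_id(read_foo_result_set_ref _Nonnull result_set, cql_int32 row);
extern cql_bool read_foo_get_b_is_null(read_foo_result_set_ref _Nonnull result_set, cql_int32 row);
extern cql_bool read_foo_get_b_value(read_foo_result_set_ref _Nonnull result_set, cql_int32 row);
extern cql_string_ref _Nullable read_foo_get_t(read_foo_result_set_ref _Nonnull result_set, cql_int32 row);

/* the row count */
extern cql_int32 read_foo_result_count(read_foo_result_set_ref _Nonnull result_set);

/* the result set fetcher, the main entry point */
extern CQL_WARN_UNUSED cql_code read_foo_fetch_results(
  sqlite3 *_Nonnull _db_,
  read_foo_result_set_ref _Nullable *_Nonnull result_set,
  cql_int32 id_);

/* row hashing and equality helpers */
#define read_foo_row_hash(result_set, row) ...
#define read_foo_row_equal(rs1, row1, rs2, row2) ...
```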
Let's consider some of these individually now.
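First up is the result set type declaration; in the generated header it looks approximately like this:

```c
cql_result_set_type_decl(read_foo_result_set, read_foo_result_set_ref);
```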
This declares the data type for `read_foo_result_set` and the associated object reference type `read_foo_result_set_ref`. As it turns out, the underlying data type for all result sets is the same; only the shape of the data varies.
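Next is the fetcher; a sketch of its generated declaration:

```c
extern CQL_WARN_UNUSED cql_code read_foo_fetch_results(
  sqlite3 *_Nonnull _db_,
  read_foo_result_set_ref _Nullable *_Nonnull result_set,
  cql_int32 id_);
```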
The result set fetcher method gives you a `read_foo_result_set_ref` if it succeeds. It accepts the `id_` argument, which it will internally pass along to `read_foo(...)`. The latter function provides a `sqlite3_stmt*` which can then be iterated in the fetcher. This method is the main public entry point for result sets.
Once you have a result set, you can read values out of it.
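The first accessor is the row count; its declaration is approximately:

```c
extern cql_int32 read_foo_result_count(read_foo_result_set_ref _Nonnull result_set);
```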
That function tells you how many rows are in the result set.
For each row you can use any of the row readers:
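Sketches of the generated declarations, one reader (or reader pair) per column:

```c
extern cql_int32 read_foo_get_id(read_foo_result_set_ref _Nonnull result_set, cql_int32 row);
extern cql_bool read_foo_get_b_is_null(read_foo_result_set_ref _Nonnull result_set, cql_int32 row);
extern cql_bool read_foo_get_b_value(read_foo_result_set_ref _Nonnull result_set, cql_int32 row);
extern cql_string_ref _Nullable read_foo_get_t(read_foo_result_set_ref _Nonnull result_set, cql_int32 row);
```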
These let you read the `id` of a particular row and get a `cql_int32`. For the nullable boolean, you use the `read_foo_get_b_is_null` function first to see if the boolean is null, and then `read_foo_get_b_value` to get the value. Finally, the string can be accessed with `read_foo_get_t`. As you can see, there is a simple naming convention for each of the field readers.
Note: the compiler has runtime arrays that control naming conventions, including the option of using CamelCasing. Additional customizations may be created by adding new runtime arrays to the CQL compiler.
Finally, also part of the public interface, are these macros:
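The macros expand roughly like this (a sketch; the exact metadata field names may differ by runtime version):

```c
#define read_foo_row_hash(result_set, row) \
  cql_result_set_get_meta((cql_result_set_ref)(result_set))->rowHash( \
    (cql_result_set_ref)(result_set), row)

#define read_foo_row_equal(rs1, row1, rs2, row2) \
  cql_result_set_get_meta((cql_result_set_ref)(rs1))->rowsEqual( \
    (cql_result_set_ref)(rs1), row1, (cql_result_set_ref)(rs2), row2)
```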
These use the CQL runtime to hash a row or compare two rows from identical result set types. Metadata included in the result set allows general-purpose code to work for every result set. Based on configuration, result set copying methods can also be generated. When you're done with a result set, you can use `cql_result_set_release` to free the memory.
Importantly, all of the rows from the query in the stored procedure are materialized immediately and become part of the result set. Potentially large amounts of memory can be used if a lot of rows are generated.
The code that actually creates the result set starting from the prepared statement is always the same. The essential parts are:
First, a constant array that holds the data types for each column.
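For `read_foo` that array looks something like this (a sketch of the generated code):

```c
uint8_t read_foo_data_types[read_foo_data_types_count] = {
  CQL_DATA_TYPE_INT32 | CQL_DATA_TYPE_NOT_NULL, // id
  CQL_DATA_TYPE_BOOL,                           // b
  CQL_DATA_TYPE_STRING,                         // t
};
```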
All references are stored together at the end of the row, so we only need the count of references and the offset of the first one to do operations like retain and release on the row.
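In the generated code that information is captured approximately like this (here there is one reference, the text column `t`):

```c
#define read_foo_refs_offset cql_offsetof(read_foo_row, t) // count = 1
```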
Lastly, we need metadata to tell us the count of columns and the offset of each column within the row.
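To make the offset metadata concrete, here is a small self-contained C sketch. This is not actual compiler output: the real code uses the generated row struct plus `cql_uint16` and `cql_offsetof`, and `nullable_bool` here is a stand-in for `cql_nullable_bool`.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

// Stand-in for cql_nullable_bool: a null flag plus the value.
typedef struct { uint8_t is_null; uint8_t value; } nullable_bool;

// Approximation of the generated row: value columns first, reference
// columns (here just the text) grouped at the end of the row.
typedef struct read_foo_row {
  int32_t id;        // NOT NULL integer
  nullable_bool b;   // nullable bool
  const char *t;     // stands in for cql_string_ref
} read_foo_row;

// Column metadata in the style the text describes: the first entry is
// the column count, then the byte offset of each column within the row.
static const uint16_t read_foo_col_offsets[] = {
  3,
  (uint16_t)offsetof(read_foo_row, id),
  (uint16_t)offsetof(read_foo_row, b),
  (uint16_t)offsetof(read_foo_row, t),
};

// With offsets like these, generic runtime code can read any column of
// any row without knowing the row type at compile time.
static int32_t row_read_int32(const void *row, uint16_t offset) {
  return *(const int32_t *)((const char *)row + offset);
}
```

The generic hashing, equality, and copying helpers mentioned earlier are built on exactly this kind of metadata.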
Using the above, we can now write this fetcher:
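A sketch of the generated fetcher (simplified; the real output also includes profiling hooks and other bookkeeping):

```c
CQL_WARN_UNUSED cql_code read_foo_fetch_results(
  sqlite3 *_Nonnull _db_,
  read_foo_result_set_ref _Nullable *_Nonnull result_set,
  cql_int32 id_)
{
  sqlite3_stmt *stmt = NULL;

  // call the original procedure to get the prepared statement
  cql_code rc = read_foo(_db_, &stmt, id_);

  // bundle the statement together with the metadata described above
  cql_fetch_info info = {
    .rc = rc,
    .db = _db_,
    .stmt = stmt,
    .data_types = read_foo_data_types,
    .col_offsets = read_foo_col_offsets,
    .refs_count = 1,
    .refs_offset = read_foo_refs_offset,
    .rowsize = sizeof(read_foo_row),
  };

  // the standard helper does all the remaining work
  return cql_fetch_all_results(&info, (cql_result_set_ref *)result_set);
}
```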
Result Sets From `OUT UNION`
The `out` keyword was added for writing procedures that produce a single row result set. With that, it became possible to make any single row result you wanted, assembling it from whatever sources you needed. That is an important case, as single row results happen frequently and they are comparatively easy to create and pass around using C structures for the backing store. However, it's not everything; there are also cases where full flexibility is needed while producing a standard many-row result set. For this we have `out union`, which was discussed fully in Chapter 5. Here we'll discuss the code generation behind it.
Here’s an example from the CQL tests:
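The example is along these lines (a close paraphrase of the `some_integers` procedure; details may differ from the actual test):

```sql
create proc some_integers(start integer not null, stop integer not null)
begin
  declare C cursor like select 1 v, 2 v_squared, "xx" t;
  declare i integer not null;
  set i := start;
  while i < stop
  begin
    fetch C(v, v_squared, t) from values(i, i * i, printf("%d squared is %d", i, i * i));
    out union C;
    set i := i + 1;
  end;
end;
```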
In this example the entire result set is made up out of thin air. Of course any combination of computation and data access is possible, so you can ultimately make any rows you want, in any order, using SQLite to help you as much or as little as you need.
Virtually all the code pieces needed to do this already exist for normal result sets. The important parts of the output look like this in your generated C:
We need a buffer to hold the rows we are going to accumulate; we use `cql_bytebuf`, just like the normal fetcher above.
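In the generated code that looks something like this:

```c
cql_bytebuf _rows_;
cql_bytebuf_open(&_rows_);
```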
We need to be able to copy the cursor into the buffer and retain any internal references.
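A sketch of what the compiler emits for each `out union` (names like `C_` and `_has_row_` follow the generated-code conventions):

```c
// retain any references held by the cursor's row, then append
// the row's bytes to the growing buffer
cql_retain_row(C_);
if (C_._has_row_) cql_bytebuf_append(&_rows_, (const void *)&C_, sizeof(C_));
```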
Finally, we make the result set when the procedure exits. If the procedure is returning with no errors, the result set is created; otherwise, the buffer is released. The global `some_integers_info` has constants that describe the shape produced by this procedure, just like the other cases that produce a result set.
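That final step is approximately one call (a sketch; `_rc_` and `_result_set_` are the generated locals for the return code and the out parameter):

```c
cql_results_from_data(_rc_, &_rows_, &some_integers_info, (cql_result_set_ref *)_result_set_);
```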
The operations here are basically the same ones that happen inside of the standard helper `cql_fetch_all_results`; the difference, of course, is that you write the loop manually and therefore have full control of the rows as they go into the result set.
In short, the overhead is pretty low. What you're left with is pretty much the base cost of your algorithm. The cost here is very similar to what it would be for any other code that makes rows.
Of course, if you make a million rows, well, that would burn a lot of memory.