Upcoming changes to the Date Time system

This post is intended to be the more technical complement to the post on our main website blog.

Background

We recently discovered the following within our automated unit tests against our date and time system:

  1. Although we test GMT offset conversion to a timezone string via EEH_DTT_Helper_Test::test_get_timezone_string_from_gmt_offset, the primary purpose of that test is just to verify that nothing fatals. The list of offsets it runs through is a hardcoded list of offsets determined to be invalid within PHP, but those offsets are not necessarily invalid in every system environment.
  2. Nowhere in our unit tests do we set the WordPress gmt_offset option to an offset, set the timezone_string option to an empty string, and then test the EE generated dates and times in the model system against what WP returns from its date methods.

As I began going about correcting the above issues, I started discovering other flaws within our code.

Before getting into the issues uncovered below, keep in mind that one of the reasons EE works with timezone strings as opposed to offsets is because that is primarily how PHP is oriented for its DateTime system. If you want to work with DateTime objects accurately in PHP, then you need to work with timezone strings not offsets.

The fact that WordPress allows users to set a GMT offset for times in their system may be fine for a general blog, but it's a huge pain for an application built around events, because offsets have no location awareness and do not inherently track any DST that might exist for that location. This problem is magnified with software like Event Espresso.

Issue One: DST

EEH_DTT_Helper::get_timezone_string_from_gmt_offset was not considering that a set timezone_string in the database could be in DST.

In WordPress, when a call is made to get_option('gmt_offset'), there is actually a default hook added by WordPress core on the pre_option_gmt_offset filter which checks get_option('timezone_string') first and, if that's present, returns the current offset for that timezone_string. So even if both gmt_offset and timezone_string are set on a WP install (which is possible, just not via the UI), the offset derived from timezone_string gets returned.
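As a hedged sketch (not WordPress core's actual code, though core's wp_timezone_override_offset() behaves along these lines), the hook effectively does this:

```php
<?php
// Hedged approximation (not WordPress core's actual implementation) of what
// the default hook does: if a timezone_string is set, the CURRENT offset for
// that timezone is returned instead of the stored gmt_offset option.
function sketch_timezone_override_offset($timezone_string)
{
    if (! $timezone_string) {
        return false; // no override; WP falls through to the stored gmt_offset
    }
    $timezone = new DateTimeZone($timezone_string);
    $now      = new DateTime('now', $timezone);
    // Note: this offset INCLUDES any DST currently in effect for the timezone.
    return $timezone->getOffset($now) / 3600;
}
```

The key takeaway is that final comment: the offset you get back can be the DST-shifted one.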

Where this is problematic is that offsets are timezone agnostic; however, the offset for a timezone_string can vary depending on whether that timezone is currently in DST or not. So if this method was called with NO offset supplied, and the timezone_string currently set on the site was in DST, then the resulting offset used for the initial search in timezone_name_from_abbr could produce an INCORRECT match.

I fixed this so that in this scenario if there is a timezone_string set in the db, we just return that instead of deriving it from what gets returned by WP as the offset.
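A minimal sketch of the fix, assuming an illustrative function name and with the db values passed in as plain parameters (the real method reads them via get_option()):

```php
<?php
// Hedged sketch of the Issue One fix (illustrative, not EE's actual code):
// when no offset was explicitly supplied and the db has a timezone_string,
// return that string directly rather than deriving it from the offset that
// WordPress reports (which may be DST-shifted).
function timezone_string_sketch($gmt_offset = null, $db_timezone_string = '')
{
    if ($gmt_offset === null && $db_timezone_string !== '') {
        return $db_timezone_string;
    }
    // otherwise fall through to offset-to-string matching
    $seconds = (int) (((float) $gmt_offset) * 3600);
    return timezone_name_from_abbr('', $seconds, 0);
}
```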

Issue Two: Offset of +0

EEH_DTT_Helper::get_timezone_string_from_gmt_offset was not properly handling scenarios where an offset of 0 was supplied. For all purposes 0 === UTC, so there is no need to go through all the logic that could return a timezone that is at 0 only because the site is currently in DST. If client code is supplying an offset to get a timezone_string, then we assume the offset carries no DST information.

So this was fixed: now if this method explicitly receives an offset, the assumption is explicit that the given value carries no DST information.
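Sketched out (illustrative name, not EE's actual method):

```php
<?php
// Hedged sketch of the Issue Two fix (illustrative name, not EE's actual code):
// an explicitly supplied offset carries no DST information, so 0 maps straight
// to UTC instead of risking a match on a timezone currently at 0 only due to DST.
function timezone_string_for_explicit_offset($offset_hours)
{
    if ((float) $offset_hours === 0.0) {
        return 'UTC';
    }
    // third argument (is_dst) is 0: the supplied offset has no DST information
    return timezone_name_from_abbr('', (int) ($offset_hours * 3600), 0);
}
```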

Issue Three: Historical Timezones

EEH_DTT_Helper::get_timezone_string_from_gmt_offset was returning matches against historical timezones.

The PHP functions timezone_name_from_abbr and timezone_abbreviations_list contain not only current timezone data, but also historical timezone data. I discovered this when running some new unit tests we have set up in the working branch. The offset -12 would get flipped by EE's usage of these PHP functions to +12! The reason is that although -12 matched a timezone_string via those functions, the current actual offset for that matched timezone_string (when using it to instantiate a DateTimeZone object) is +12. So the matched timezone_string historically had an offset of -12 but in the current day no longer has that offset.

To fix this, I added some further checks on matched timezone strings to make sure that the current offset for that matched timezone_string equals the incoming offset. If they don’t match then that timezone_string match is rejected.
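A hedged sketch of that check, assuming an illustrative function name:

```php
<?php
// Hedged sketch of the Issue Three fix (illustrative, not EE's actual code):
// a match from timezone_name_from_abbr() is only accepted if that timezone's
// CURRENT offset still equals the incoming offset; otherwise the match is
// merely historical and gets rejected.
function current_timezone_string_for_offset($offset_hours)
{
    $seconds = (int) ($offset_hours * 3600);
    $match   = timezone_name_from_abbr('', $seconds, 0);
    if ($match === false) {
        return null; // no match at all for this offset
    }
    $now = new DateTime('now', new DateTimeZone($match));
    if ($now->getOffset() !== $seconds) {
        return null; // historical-only match, e.g. -12 mapping to a zone now at +12
    }
    return $match;
}
```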

Doing this in turn revealed a number of offsets that are settable via the WordPress UI but have no equivalent current timezone_string matches in PHP! To complicate things, that list of invalid offsets is dynamic: it depends on whether the server the site is on has up-to-date timezone offset maps, which in turn is influenced by the server OS and/or the PHP version installed. The fixes implemented account for this.

Issue Four: Inability to Do Certain Tests

I added some comments to Model_Data_Translator_Test::test_prepare_conditions_query_params_for_models__gmt_datetimes that explain why certain offsets were removed from the list that gets tested. This was done intentionally: with the implemented fixes, the offsets that get adjusted by EEH_DTT_Helper::adjust_invalid_gmt_offset are changed to the closest offset that has a corresponding current (not merely historical) timezone_string. This means that sometimes the value for “now” saved to the db will NOT match the value for “now” generated by the WordPress current_time function, because that function works with offsets directly and does not rely on PHP’s timezones at all (when the only time information on the WP site is gmt_offset, which is what this test works against). So it’s pretty much impossible to reliably test comparisons between the offsets we convert and the offset WP uses, because the results can vary between server environments.

Practically speaking, the tests that matter are still covering critical functionality.

What this means

If you are using any code that interacts directly (or indirectly) with our EEH_DTT_Helper::get_timezone_string_from_gmt_offset method (or any of the public methods it calls), you need to be aware of how its behaviour will change when these fixes are released (as described above).

This also means that for sites using a GMT offset (as opposed to a timezone_string), the resulting values for saved dates and times in the database (when displayed) may not be as expected because the database values were converted using an incorrect offset to begin with.

To fix the above scenario on affected sites, there are a couple options:

  1. You can use the bundled tool that provides a UI for fixing the offset on all saved EE date and time values in the database.
  2. You can manually fix things for affected sites using a variation of the query found here (note this query only affects datetime offsets; there are other values in the database that use EE_Datetime values which are affected, and you’ll want to run the query against those as well): https://github.com/eventespresso/ee-code-snippet-library/blob/master/mysql-queries/update-offset-on-all-datetimes.sql

The good news is that sites using timezone_strings rather than GMT offsets will not be affected by any of this.

To OOP or not to OOP ?

While performing a code review for some work I had done, I was asked the following question by Darren regarding a new method I had added to EEM_Registration called event_reg_count_for_statuses() :

 I wonder if it’d be better to have the first argument be an $Event ID instead of a full event object?

The method in question type hinted for an EE_Event object, then used that object to populate part of a query’s where conditions, substituting the Event ID into a ### placeholder in the query.
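Since the original snippets aren’t reproduced here, the following is a hedged sketch of roughly what the method looks like after the change. The class and method bodies are illustrative stand-ins, not EE’s actual code; the 'EVT_ID'/'STS_ID' keys mirror EE’s model query syntax:

```php
<?php
// Hedged sketch (NOT EE's actual implementation) of the query helper after the
// change: the first argument is a plain Event ID rather than an EE_Event object.
class EEM_Registration_Sketch
{
    public function event_reg_count_for_statuses($EVT_ID, array $statuses = array())
    {
        $where = array(
            'EVT_ID' => (int) $EVT_ID,           // what the ### placeholder held
            'STS_ID' => array('IN', $statuses),
        );
        return $this->count(array($where));
    }

    protected function count(array $query_params)
    {
        // stand-in: the real model would run a COUNT query against the db
        return 0;
    }
}
```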

Now normally it’s advisable to try and type hint for objects as much as possible, but keeping in mind that this method was basically a query helper on the EEM_Registration model used for retrieving information from the database, Darren’s suggestion was actually a better approach.

But why?

Function Parameters : Primitives or Objects ?

As stated above, it’s normally advisable to try and type hint for objects as much as possible because there are many benefits to be gained from passing an object instead of any other alternative. The following are some musings on why this is usually the case, written using the method discussed above as a point of reference.

Guaranteed Identity

If a function accepts a value that we cannot type hint for, then we really don’t know what we are getting. Even if an int is received (as opposed to some other value), how do we know that it is a valid Event ID? It could be any random number, and no amount of validation logic ensuring that we receive an int can change that. But by simply type hinting for an EE_Event, we have guaranteed that the value we pass along to the query is not only an int, but a valid Event ID. No further validation logic beyond the type hint is necessary, and the method gains a lot of stability.

Efficiency and Memory Usage

In PHP, objects are effectively passed by handle (no copy of the object is made), whereas passing an int requires a new local variable holding a copy of an existing value, so more memory is used as more entries land on PHP’s stack. This isn’t a big deal if only two methods are involved, and the one receiving the Event ID to perform the query isn’t passing that ID along anywhere else (other than the model). But if the chain of methods is longer, and the Event ID gets passed around a lot, the memory usage for that one bit of data can grow quickly. Change your variable to an array of data and the inefficiency goes way up.

When I first started with EE working on EE3, the methods involved in the registration process would all pass IDs amongst themselves. This was horrifically inefficient: a method would receive an Event ID, query for the Event object so that it could get other data for the Event, then pass the ID along to the next method, which would also query for the Event object so that it too could get other data for the Event. This sometimes happened 4-5-6 times in a row, meaning 4-5-6 queries for an Event object that was already in the system. Many of these methods would also save their changes to the Event object before passing its ID along, so a series of function calls could result in dozens of unnecessary additional queries (we can thank the previous lead dev Abel for that bit of amazing code /sarcasm).

This is the worst-case scenario and the main thing that I want to avoid. So if you are writing a method that requires an object, you should type hint for that object instead of simply requesting an ID. Any other methods that use your method can then handle obtaining the object based on what data they have, assuming they don’t already have it.

Fail Early at the Source

When a function type hints for an object, it can only receive a valid instance of that object. Where the object was originally instantiated is inconsequential, because PHP only passes handles around instead of copies of the object, and if a valid object could not be created, the error would likely be discovered immediately. But when you pass primitive data types around, it can sometimes be difficult to determine where the data originated.

If a function requires a valid Event ID but some invalid value is received instead, then we are farther away from the original source of the invalid data. By forcing the code that first obtained the int to retrieve an Event, any errors can be thrown at the source of the problem. I have experienced difficulties finding the source of a problem because some variable was passed around through filters and/or actions, and by the time the variable was discovered to be invalid, the source no longer appeared in the stack trace. This follows the fail-early philosophy: in an ideal application, request data would be validated immediately and any invalid data would produce errors immediately. By passing a bare integer through the system, we cannot do this. Of course, it’s not always ideal to convert an int to an object just so that you can retrieve that same int again from the object (especially if a query was required to build the object).

Domain Driven Design

In articles discussing Domain Driven Design, you will often see a system represented by a series of rings or hexagons surrounding each other. The innermost ring represents the domain, where all of the business logic resides. The domain should be completely ignorant of the request, as that interaction should be handled by one of the outer rings.

Imagine the stability of a domain that ONLY type hints for objects, as opposed to one that allows primitive data types to be passed around. The latter would require significantly more validation and processing to ensure that its methods receive the data they require, whereas a domain that exclusively type hints for objects would be much more stable, since the range of data its methods could accept is greatly reduced. In this kind of system, the outer ring that interacts with the incoming request is responsible for validating all of the incoming data before passing it to the domain. So your controller-type classes that receive the user input or request data would also need access to a model or repository from which they could pull the appropriate object before passing things along to the inner domain.

In this situation, since event_reg_count_for_statuses() is a query helper method on the model, it’s appropriate for it to receive an ID instead of an object. So in this case, we decided not to OOP.


Testing Started on EE4 REST API Write Endpoints

On branch 9222, you can now insert, update, and delete EE4 model data (e.g. events, venues, registrations, payments… almost all EE4 data) via the REST API interface.

This will facilitate tasks like event registration via a single-page javascript application, updating registration and event data from a mobile app, and synchronizing data from a 3rd party web service to EE4.

It is currently undergoing internal testing, but we invite testing and feedback from others. Please give it a try, and open a github issue if you have suggestions (or heck, even if you have no suggestions, that’s good to hear too; it will help us get it released sooner).

Check the branch out here, and read the documentation here.

5 Tips for Contributing Code to Event Espresso

I just posted 5 Tips for Contributing to Open Source Software like Event Espresso on the main site’s blog. I hope it will help new contributors to Event Espresso, but I think the suggestions apply generally to all open source.

Addition of json schema to REST API

With the release of 4.9.26.p of Event Espresso core, our REST API now exposes a json schema for each of the collection endpoints.  You can read more about it in our internal documentation.

New form input class and validation strategy

Event Espresso 4.8.41 adds a new form input, EE_Select_Reveal_Input (which can show/hide related form sections), and a new validation strategy, EE_Conditionally_Required_Validation_Strategy (which can make the related input conditionally required, based on the value of a related form input). Please see the updates in our forms documentation for more details.

REST API addition in Event Espresso 4.8.40, new backwards-compatibility policy

Due to feedback from WordPress REST API core developer Daniel Bachhuber, we have made a change to our backwards-compatibility policy in the EE4 REST API. Please read the details here, but the summary is that we will only be adding new EE4 REST API versioned namespaces (e.g. v4.8.36) when we want to introduce a significant change to existing behaviour, not every time we add a new feature.

Along with that, we added a new calculated field onto the existing EE4 REST API versioned namespace (v4.8.36) in Event Espresso 4.8.40.p: registrations’ datetime_checkin_stati. This should be helpful for easily determining if a registration is checked into, or out of, their applicable datetimes.

Lastly, a REST API changelog was added which will log changes affecting consumers of the EE4 REST API. This should be helpful for REST API consumers who aren’t necessarily as concerned about changes to Event Espresso’s internals or web interface.

Changes with Developer Documentation

Developer documentation is one of those things every team knows is good to put out there for developers working on your platform, but also tends to be one of the last things on the priority list for getting done.  There are various reasons for this:

  • Maintaining the site hosting the documentation takes too much time.
  • The tools for doing the documentation are meh.
  • The code changes over time, easily making existing documentation stale, and having to go through all the pre-existing work and update it is a pain.
  • We’d rather just write code.

However, at Event Espresso, we realize that having good quality documentation directed to developers building on our platform is important because, among other things:

  • It helps us think through features and systems we are writing about (if it’s too hard to explain to others, then maybe it’s too complex).
  • It removes some friction third party developers can experience when learning to work with Event Espresso for the first time.
  • It helps give some explanation for “Why we did it this way?” for developers who have trouble grasping some seemingly arcane way we chose to do things in the code.
  • It helps developers find easier ways to integrate with and work with Event Espresso as they build their own extensions and custom code for their clients.

When I first threw up developer.eventespresso.com, it was a weekend project that I launched fairly quickly to hopefully get the ball rolling.  So I set up a WordPress site, loaded a few plugins that I thought would help with organizing things, and ran with it.  For the most part it’s served our team well, but it really hasn’t helped remove the friction outlined at the beginning of this post.  Plus, as a team, we’ve never really been happy with how easy (or not) it is to actually find things on this site.  As I perused the list of stuff we still have to document, I realized that long-term, the structure of this site wasn’t viable.

With that in mind I realized we needed something that:

  • would have 0 maintenance overhead.  No theme adjustments to make, no website updates to manage etc. etc.   We wanted to be able to just focus on content and not the presentation (save TIME).
  • still allowed us to own our data/content and quickly move it around if necessary.
  • has EASY versioning so that the documentation developers viewed was always relevant for the current code they were working with.
  • has EASY authoring tools, so when we wrote documentation we’d spend less time wrangling with formatting etc., and could focus on just getting things written and out there.
  • has great organization and search capabilities for developers to actually find stuff.
  • is not too expensive.
  • is EASY for developers to discover.

With that in mind, I started looking for things that might help us accomplish those goals.  I first came across readme.io and was swept in with the beautiful design and easy content management (especially their REST API tools, they rock!).  However, although there is an export utility, I still didn’t like that the exports (although in markdown) were polluted with custom blocks/shortcodes used for their platform.  If we ever had to move our documentation elsewhere, converting it could be a pain and take unnecessary time.

I also thought of using something like Jekyll, but really, although it’s cool and all, there is still some maintenance overhead involved.

Then in some internal communications, one of our devs (Brent) suggested, “Hey, why don’t we just add a docs folder to our repo and throw all our dev docs in there?”  Eureka.  So freakin’ obvious, but so easily overlooked.  And so I ran with it.

Why is this good for you, the developer working with Event Espresso?

  • ALL of our documentation is now easily discoverable and accessible right along with our code in the docs folder (you can view it on github – which renders markdown really nicely).
  • Our documentation will pretty much always be correct for the branch you view it in.  Going forward, when our dev team writes docs about a new api/feature we introduce/deprecate/update, we can do it IN the branch where it’s introduced.
  • Github search is way better than WordPress search.  Plus as a bonus, now when you search through Event Espresso core on github, not only will you get results for code returned, but ALSO docs!  Groovy.
  • Have questions as a result of the docs?  Open an issue and link to the doc right there.
  • Read the doc and find something incorrect, or something you think could be expanded?  Submit a pull request!  Yeah, you can help the documentation get better (really easily).
  • With a lot of the friction to writing documentation removed, it’s much more likely we’ll get more useful documentation out there for you.

Why is this good for our team?

  • ZERO maintenance.  No worrying about presentation of the docs, or wrestling with formatting, keeping plugins up-to-date etc.
  • Less struggle with versioning and docs becoming stale.  Now, whenever something changes in a branch we’re working on, we can just update the docs in that branch and when it gets released (merged to master branch) all the docs will be up-to-date.
  • Focus on content, rather than formatting etc.  Markdown is ridiculously easy to write with, and github flavoured code styling is a breeze.  Our docs can be written right in our IDE!
  • no extra cost (free).

Won’t the docs folder add to the size of the release package?

No.  Currently we have a build process via our grunt bot that removes folders and files we don’t include in zip builds for releases.  Literally a one-liner to ensure that the new docs folder gets removed from those builds as well.

What’s going to happen to developer.eventespresso.com (and the existing links to documentation)?

We’re still going to keep developer.eventespresso.com up for our developer focused blog.  Having a blog is still useful for timely announcements about changes that we can link to the new static docs.  So if you are currently a subscriber to our blog feed, that’s still going to be an excellent way for you to receive developer focused news about Event Espresso.

As well, the github address for our new documentation home is kind of long.  So developer.eventespresso.com/docs is what we will use to link to it.

Finally, we’ve 301’d all old links to documentation so they point to the new home of each item in our docs folder on github.  This means that if you’ve been using any of those links internally with your teams or on your own sites, they should still work just fine; they’ll just redirect to the new home for that doc.

Wrap Up

I hope this change ends up being as good as I think it will be.  Let us know what you think in the comments.

Changes to the Messages system coming in EE 4.9

We’re currently in the final round of testing of the next major release of Event Espresso, 4.9. In this release, there was a significant refactor of the messages system so this post is intended to give a heads up about all the changes coming.

[notification type="alert-info" close="false"]This post gives an overview of the changes to the messages system.  For more details you can read an Overview of the Messages System and a Code Flow diagram.[/notification]

Introduction of a Message Queue System

This new system tracks and prioritizes when messages are generated and when they are sent. When messages are triggered, they are no longer generated immediately and sent on the same request. Instead, they enter into the queue and all processing happens on separate requests.

Most messages are thus persisted to the database because of the queueing system. Some still aren’t if they are browser/pdf specific (i.e. invoice/receipt/tickets) but this could change in future iterations.

This new system resulted in a rewrite of much of the controllers and business logic of the messages system, which of course introduced a number of new classes. One of the notable changes is the introduction of the EE_Message entity, which replaces the old way of passing around a stdClass object to represent a generated message. You can read more about this and the other classes introduced in the new documentation.

Changes to Triggers and Message Priority

In the messages system, a trigger is a way of referring to something that kicks off the message process. That could be when the checkout has approved a registration, or when a user clicks a special link in a list table (i.e. the resend message link found in the Registrations List Table).

The new system makes no changes to how existing triggers function in terms of expected results, however there are significant changes to how a trigger is processed.

All messages that are not for messengers flagged “send now” start off as a “MIC” status message object and are “queued” for generation, then saved to the db.

In the more detailed documentation, we outline all the various stati for the EE_Message object; basically, every message starts off as MIC (or incomplete) except for those flagged “send now”.

The system has a method for messengers to indicate that a message is to be generated and sent immediately on the same request. The messengers that currently set this flag to true are the html and pdf messengers, as we want those to trigger immediately.

If a message is not for a send_now messenger, then the next thing the messages system looks at is the priority on the message type. Currently there are three possible priorities a message type can have:

  • EEM_Message::priority_high: indicates a message should be generated and sent as soon as possible.
  • EEM_Message::priority_medium: indicates a message should be generated as soon as possible but can be queued for sending.
  • EEM_Message::priority_low: indicates a message should be queued for generating.

Currently, all payment message types are EEM_Message::priority_high, all registration message types are EEM_Message::priority_medium, and the newsletter message type is the only current EEM_Message::priority_low message type.

The important thing to remember about message type priorities is that everything happens on a separate request, the priority just indicates how SOON it happens on a separate request.

So that means if a message type is EEM_Message::priority_high then the messages system will save the MIC message (with the bare amount of info needed for generation) and will initiate a separate non-blocking request to begin the generation. Thus it will be generated immediately but on a separate request from the trigger. Then on that separate request, the priority check indicates that another non-blocking request is to be done to SEND that message immediately. The end result is that the end user that triggered the message should see EEM_Message::priority_high messages show up fairly rapidly (with caveats, you’ll see that in the next point about scheduling).

If a message type is EEM_Message::priority_low then that means that the MIC EE_Message is saved to the db, however there is no separate request initiated and it’s just left there for the next batch schedule to kick in… which brings me to the next point.

Batch Schedules

The new messages system sets up two wp-cron schedules on activation. Both schedules are currently set to a 10 minute interval. However, this can be changed using a filter:
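For example (note the hook name below is HYPOTHETICAL, used for illustration only; check the released code for the actual filter name the scheduler applies):

```php
<?php
// Hedged example of adjusting the wp-cron interval via a filter. The hook name
// 'FHEE__EE_Messages_Scheduler__interval' is HYPOTHETICAL — consult the EE
// source for the real filter name once 4.9 is released.
add_filter(
    'FHEE__EE_Messages_Scheduler__interval',
    function ($interval_in_seconds) {
        // run the batch schedules every 5 minutes instead of the default 10
        return 5 * MINUTE_IN_SECONDS;
    }
);
```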

The first cron schedule will trigger a query to retrieve all non-generated messages, sorted by priority, up to a set “batch” amount (currently 50). The messages system will then generate those messages, change their status to MID (idle, ready for sending), save them, and, depending on the priority level, either immediately initiate another separate request to start a batch send OR just exit, leaving them queued for the next batch send schedule to fire.

The second cron schedule will trigger a query to retrieve all non-sent messages (ordered by priority) then proceed with sending them with the appropriate messenger. Currently this schedule will do 50 messages at a time (it’s filterable, and something we can adjust if needed).

The important thing to remember about how these schedules work is this:

  • they don’t do ALL messages for the given type, they only do a set limit (batch limit) for each scheduled request. This limit is filterable.
  • While a schedule is executing, there is a lock set (for that specific schedule, generation or sending) to prevent any other scheduled message requests from executing while processing is happening. This means if a batch generation request is running, then any other incoming requests for batch generations will be prevented from executing due to the lock. The locks themselves have an expiry set (defaults to one hour, but is filterable), so if something goes wrong and the lock isn’t removed (server crash or something), then this prevents a complete lockout.
  • The above is an important point, because although an EEM_Message::priority_high message immediately initiates a request to generate/send when triggered, IF there is a lock due to a scheduled batch request running, then it will not get generated/sent and will instead wait until the next batch. The opposite is true as well: if an immediately initiated request is running when a scheduled batch request fires, the scheduled request will not complete because of the lock and will resume on the next firing. What’s important to remember, though, is that if an EEM_Message::priority_high message triggers immediate generation/sending on a separate request and there is no current lock, it executes the same BATCH process as a regular schedule. So that means that up to the batch limit will be processed for that request.
  • Batches retrieved from the db are always ordered by priority. That means higher priority messages will always be generated/sent out before lower priority messages.
  • The batch send method not only has a limit on the batch retrieved from the db, but also a rate limit on how many messages can be sent during a one-hour period. This rate limit currently applies across all messengers (in a future iteration I’m likely going to set it per messenger) and is currently 200/hour. That means that in any given hour period, at most 200 messages will get sent, and if that limit is reached, no more messages will get sent until the next hour period hits. This number was arrived at by researching what a number of web hosts have set as their email sending rate limits and choosing a conservative average. Keep in mind this is filterable, so people using a really good web host (or an email service like Mandrill) can raise this rate limit if needed. I think for most average users, though, 200/hour is plenty.
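
Adjusting the rate limit via a filter would look something like this (the hook name below is HYPOTHETICAL; consult the messages system code for the real one):

```php
<?php
// Hedged example of raising the hourly send rate limit. The hook name
// 'FHEE__EE_Messages_Queue__rate_limit' is HYPOTHETICAL — check the messages
// system code for the actual filter name.
add_filter(
    'FHEE__EE_Messages_Queue__rate_limit',
    function ($messages_per_hour) {
        // e.g. on a host or email service that allows a higher sending rate
        return 500;
    }
);
```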

Messages Admin changes

Message Activity

The messages refactor brings a brand new list table that displays all saved messages and their status. Currently, the only time you will see html or pdf messenger related messages in this list table is when there is an error (resulting in MFL status), so for the most part you will only see ‘email’ messenger messages (this will likely change in a future iteration, though, because we may find we want to save generated invoices/receipts).

The Message Activity List table serves the following purposes:

  • as an archive of all messages that are in various stages (ready to be generated, all the way to sent).
  • failed messages show up here and the system makes every attempt to save a useful error message with the failed message to assist with troubleshooting why message generation/sending failed.
  • admins can trigger immediate generation and/or sending via this list table.
  • admins can resend specific messages from this list table.
  • the table is used for filtered results by registration, transaction, event, contacts etc. This helps admins answer questions like, “What are all the messages that have been sent for John Smith?”

Please note:

Each row in the message activity list table has the status indicated via the colored strip in the leftmost column. This will help you understand what stage a message is at. Here’s a screenshot of the legend that shows what each color represents. You can also get the status by hovering your cursor over the status column.

legend

Admin message trigger links

The following behaviour should be expected:

  • All existing trigger links (i.e. resend registration message from the registration list table) should work as they currently do and result in a newly generated message. The only thing that changes is that the message will not be generated/sent on the same request and will follow the new priority rules (as mentioned earlier). I also changed the success messages that show after executing a trigger. You can see the status of messages sent this way via the new Message Activity list table.
  • Currently on the transaction list table, registration list table, and event list table there is a new action for each row for viewing all messages related to the record the action was triggered from:

reg list table

The above is an example from the transaction list table. When you click the “megaphone icon”, it takes you to the Message Activity list table showing only the messages related to the transaction row you clicked the link in. If you click from a registration list table, the resulting messages shown will be related to that registration; if from the event list table, the Message Activity list table will show all messages sent related to that event.

Resending already-sent messages without regenerating them will always be done from a Message Activity list table. Currently the only place that table is found is on the Messages Admin Page route, but in other tickets I'll be adding the list table to the contact details, registration details, and transaction details pages. That way event admins can resend already generated message(s) for those contexts at will.

REST API additions and changes in 4.8.36

In Event Espresso 4.8.36 we're releasing a new namespace for the EE4 REST API, which includes a few new features and changes:

  • totals in headers, so that you can easily get a count of items in a set. E.g., to count how many events are upcoming, you previously needed to query for all events and count them yourself. Now the header “X-WP-Total” is included in the response and contains the count of all items in the collection, ignoring any limit set on the query. See the related documentation on response headers
  • calculated fields, optional extra fields which can be included in responses and which contain information that's otherwise available but tricky to figure out. E.g., to get the total registrations made for an event, instead of manually querying for all the registrations and counting them yourself, you can add calculate=spots_taken onto a GET request for events, and the count of approved registrations will be included as part of each event returned in the collection. See the related documentation on calculated fields
  • new representation for infinity: before, whenever a response item needed to represent infinity (e.g. a ticket with no limit) we represented it with -1. Going forward we will be using null to represent infinity, since it's less ambiguous than -1 (there are times a field could be either negative or infinite). This is only a change in the new 4.8.36 endpoints and any future releases of the EE4 REST API; endpoints in the namespaces 4.8.34, 4.8.33 and 4.8.29 will continue to use -1 for infinity for backwards compatibility. See the note on gotchas, which include this change.
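For client code that consumes the API across both old and new namespaces, the change in infinity representation means a small normalization step. Here is a rough sketch in Python; `normalize_limit` is a hypothetical client-side helper, not part of the API itself:

```python
import math

def normalize_limit(value, namespace="4.8.36"):
    """Convert a limit-style field from an EE4 REST API response
    into a uniform Python value.

    Legacy namespaces (4.8.29, 4.8.33, 4.8.34) use -1 for infinity;
    the 4.8.36 namespace and later use null (JSON), i.e. None here."""
    legacy = namespace in ("4.8.29", "4.8.33", "4.8.34")
    if (legacy and value == -1) or (not legacy and value is None):
        return math.inf
    return value

# e.g. a ticket with no quantity limit:
#   legacy namespace response: {"TKT_qty": -1}
#   4.8.36 namespace response: {"TKT_qty": null}
```

A finite value passes through unchanged, so the same helper works for limited and unlimited fields alike.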

As usual, let us know what you think!