Energy Management Systems (EMSs) increase energy efficiency by computing optimized operational schedules and executing these on devices and systems. To operate effectively, they require forecasting and optimization algorithms. However, the development costs of target-specific algorithms generally outweigh the monetary savings they generate after deployment. The ESG framework therefore provides generic forecasting and optimization algorithms that are centrally provided as web services, drastically reducing development costs.

This section explains how to apply the ESG framework and gives an overview of the basic technical concepts. For more in-depth information, especially regarding the discussion of the approach, please refer to the published `research article <https://de.overleaf.com/project/6565c3491f8923df81a997ac>`__.

Framework
---------

The figure below shows the basic architecture of an EMS that integrates the ESG framework.

The API of the framework is based on RESTful principles, with authorization handled through JSON Web Tokens (JWTs). It enables the

* retrieval of a forecast or optimized schedule and
* fitting of system-specific parameters of a service.

The intended interaction of the EMS with the API consists of the following calls in the given order (a client-side sketch follows the list):

1. Issue a POST call to the ``/{version}/request/`` or to the ``/{version}/fit-parameters/`` endpoint.
2. Poll the GET endpoint ``/{version}/request/{task_ID}/status/`` or ``/{version}/fit-parameters/{task_ID}/status/`` respectively for the status of the job initiated in step 1. The ``task_ID`` is returned in the response of the POST call.
3. Once the status is ``ready``, issue a GET call to the ``/{version}/request/{task_ID}/result/`` or ``/{version}/fit-parameters/{task_ID}/result/`` endpoint to retrieve the result of the computation.
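
The snippet below sketches this polling interaction from the client side with the Python ``requests`` library. The service URL, the token value, the request body, and the exact field names of the responses are assumptions for illustration; the authoritative definitions are given by the data models and the OpenAPI schema of the respective service.

.. code-block:: python

    import time

    import requests

    # Hypothetical values; replace with the actual service URL and a valid JWT.
    SERVICE_URL = "https://example.com/pv-forecast/v1"
    HEADERS = {"Authorization": "Bearer <JWT>"}

    # 1. Create a task; the body must match the input data model of the service.
    response = requests.post(
        f"{SERVICE_URL}/request/",
        json={"arguments": {"latitude": 49.01, "longitude": 8.40}},
        headers=HEADERS,
    )
    task_id = response.json()["task_ID"]

    # 2. Poll the status endpoint until the computation has finished.
    while True:
        status = requests.get(
            f"{SERVICE_URL}/request/{task_id}/status/", headers=HEADERS
        ).json()
        if status["status_text"] == "ready":  # field name is an assumption
            break
        time.sleep(1)

    # 3. Retrieve the result of the computation.
    result = requests.get(
        f"{SERVICE_URL}/request/{task_id}/result/", headers=HEADERS
    ).json()
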
The format of the input data for calls to the ``/request/`` and ``/fit-parameters/`` endpoints and of the output returned by the corresponding ``/result/`` endpoints has to be specified by the developer. However, the ESG package provides building blocks for data models (see section :ref:`data_model`) to reduce the time spent on defining them.

Service Components
------------------

A functional service that implements a forecasting or optimization algorithm consists of three categories of components:

1. **Base**: Contains the components necessary for executing the code of the service.
2. **Service Framework**: Contains all components generic to all services.
3. **Service Specific**: Contains all components a service provider must implement to derive a functional service.

.. figure:: graphics/service_components.png
   :alt: Service Components
   :align: center

   Service Components

The worker, which is the second component within the service framework, enables concurrent processing of requests, effectively decoupling the API from interacting with forecasting or optimization code. To enhance performance, the worker and API should be operated in distinct processes connected only through a message broker.

The illustration above shows the full architecture of a service. The API is the entry point for the client software and interacts with the worker through the message broker. For each valid POST request, the API publishes a task on the message broker and assigns it a unique ID. A worker then fetches the task and starts computing the result. It regularly publishes status updates on the processing progress to the broker and finally the result of the computation. If the EMS calls the ``/status/`` endpoint, the API fetches the latest update regarding the corresponding task and returns the information to the client. Accordingly, if the EMS calls the ``/result/`` endpoint, the API fetches the result from the broker and returns it to the client.

To keep the message broker clean, a garbage collector deletes task-related data from the broker that are likely not required anymore.

Operation Concepts
------------------

All components, also referred to as processes, of the service need to be wrapped in individual Docker containers in order to enable parallel execution of multiple instances of the same process and distribution of the processes across different machines. For commercial applications, the orchestrator `Kubernetes <https://kubernetes.io/>`__ is well suited, while academic applications are well served by choosing `Docker Swarm <https://docs.docker.com/engine/swarm/>`__ as an orchestrator.

There are two supportive applications that service providers will have to operate:

* A **Gateway** (also known as an ingress or reverse proxy): This makes the API containers accessible to the EMS and balances the distribution of requests across the API containers. Ideally, it also takes care of encrypting the communication between the client and the service through HTTPS.
* An **Identity Provider (IdP)**: The IdP issues the JWT tokens to the client software using the OpenID Connect (OIDC) protocol (e.g. the application `Keycloak <https://www.keycloak.org/>`__).

Implementation
==============

A reference implementation of the design concept can be found in the `open source repository <https://github.com/fzi-forschungszentrum-informatik/energy-service-generics>`__. While this implementation is in Python, the concepts are described in a sufficiently generic way to allow for implementation in other programming languages, e.g., Java.

Inter-process Communication
---------------------------

For the inter-process communication between API and worker, the ESG framework employs `celery <https://github.com/celery/celery>`__, an open source Python library for distributed task execution. It is worth noting that celery supports different message brokers, e.g. Redis and RabbitMQ.
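
To sketch this wiring, a minimal celery application connecting API and worker through a Redis broker could look as follows; the broker URL and the application name are assumptions.

.. code-block:: python

    from celery import Celery

    # The message broker transports tasks from the API to the worker; the
    # result backend stores status updates and results for the API to fetch.
    app = Celery(
        "pv_forecast_service",  # hypothetical name
        broker="redis://localhost:6379/0",
        backend="redis://localhost:6379/0",
    )
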
Worker
------

The worker uses celery to interact with the message broker. The ESG package already provides a generic worker that invokes the service-specific forecasting or optimization code.

Garbage Collector
-----------------

The garbage collector integrated in the ESG framework is the one provided by celery. Depending on the choice of the message broker, no dedicated process is required (see the `celery documentation <https://docs.celeryq.dev/en/stable/userguide/configuration.html#result-expires>`__).
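
For instance, celery's ``result_expires`` setting controls how long task data is retained before it is cleaned up; the one-hour value below is merely an illustrative choice.

.. code-block:: python

    # Task status updates and results are deleted one hour after completion.
    app.conf.result_expires = 3600
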
API
---

The framework selected for the implementation of the API is `FastAPI <https://fastapi.tiangolo.com/>`__. FastAPI already provides some functionality out of the box, such as the automatic generation of OpenAPI schema documents and a SwaggerUI, which can usually be accessed at `localhost:8800 <http://localhost:8800>`__.

The API is fully functional, except for the definition of the data models. Service providers only have to define the input data for the ``POST /request/`` and ``POST /fit-parameters/`` API methods and the corresponding output data for the ``GET /result/`` API methods. As mentioned above, the ESG package contains building blocks for data models to reduce the time spent on defining them (see section :ref:`data_model`). The data format for all data exchange is JSON.

The example implementation utilizes `PyJWT <https://pyjwt.readthedocs.io/en/stable/>`__ to verify calls to the API endpoints before using celery to publish tasks to the message broker.
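
A minimal sketch of such a token check with PyJWT is shown below, assuming an RS256-signed token and a known public key of the IdP; the audience value is an assumption as well.

.. code-block:: python

    import jwt  # provided by the PyJWT package

    def verify_token(token: str, public_key: str) -> dict:
        """Validate a JWT issued by the IdP and return its claims.

        Raises jwt.InvalidTokenError if the token is expired, carries an
        invalid signature, or was issued for a different audience.
        """
        return jwt.decode(
            token,
            public_key,
            algorithms=["RS256"],
            audience="pv-forecast-service",  # hypothetical audience
        )
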
Additional Functionality
------------------------

* **Generic Client**: The ESG package comes with a generic client that can be used to trigger calls to services from Python source code, as sketched below.
* **Utility functions**: The ESG package includes useful utility functions, e.g., for parsing pandas DataFrames from JSON data.
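
Usage of the generic client might look as follows; the import path, class name, and method names are assumptions for illustration.

.. code-block:: python

    from esg.clients.service import GenericServiceClient  # hypothetical import

    # Connect to a hypothetical service instance.
    client = GenericServiceClient(base_url="https://example.com/pv-forecast/v1")

    # Trigger a request and wait for the computed result.
    client.post_request(
        input_data={"arguments": {"latitude": 49.01, "longitude": 8.40}}
    )
    result = client.get_results()
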
Example
=======

This section demonstrates a simple but fully functional implementation of a photovoltaic (PV) power generation forecast service. For more in-depth information about the individual components, please refer to the :doc:`01_concepts` section or the published `research article <https://de.overleaf.com/project/6565c3491f8923df81a997ac>`__.

1. Preparation of the forecast or optimization code
----------------------------------------------------

To implement a service, the code that is to be the payload of the service needs to be prepared. This code is responsible for the actual forecast or optimization task. In this example, the `pvlib <https://github.com/pvlib/pvlib-python>`__ library has been utilized, which computes the PV power production forecast. As mentioned already though, the framework does not impose any restrictions on the code that is wrapped by it, i.e., linear programs, classical statistical models, or fully black-box machine learning approaches are all possible.

First, the input data needs to be provided to the library. To keep this example simple, only the geographic position, i.e. latitude and longitude, as well as the geometry of the PV system, i.e. azimuth and inclination, and the peak power are selected to describe the PV system. The second part of the required input to compute the PV power prediction consists of meteorological forecast data, especially forecasts of solar irradiance.

However, this example service is intended to produce PV power generation forecasts for systems for which geometry and peak power values may be unknown and need to be estimated from power production measurements. Therefore, the parameter fitting has been implemented with a simple least squares approach, although it should be noted that this choice has no particular relevance for the present example. Thus, the input data necessary to obtain a forecast is separated into two groups:

* **arguments**: here latitude and longitude
* **parameters**: here azimuth, inclination, and peak power

Finally, it should be considered that demanding all input data from the client may not be a good choice. In the present example, the service instead fetches the meteorological data automatically from a third-party web service, which, in practice, makes the interaction with the service more convenient and less error-prone for the client.

The actual format of ``input_data`` and ``output_data`` is implicitly defined in the corresponding data models, which are introduced in the following section.

Only the functions ``handle_request`` and ``fit_parameters`` are discussed here, as they are the only part of the service implementation that actually interacts with the forecasting or optimization algorithm. The functions ``predict_pv_power``, ``fetch_meteo_data``, and ``fit_with_least_squares`` are not covered, as their practical implementation details are not necessary for a developer wanting to implement their own service. However, the code of the omitted functions can be found in the repository of the `ESG framework <https://github.com/fzi-forschungszentrum-informatik/energy-service-generics>`__. Implementing ``fit_parameters`` is optional and can be omitted for services without fittable parameters.
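
As the original listing is not reproduced here, the following is a hypothetical sketch of what the two functions could look like; all signatures and field names are assumptions, and the omitted helper functions are assumed to be defined in the same module. The authoritative version is linked below.

.. code-block:: python

    def handle_request(input_data):
        # Fetch the irradiance forecast for the position given in the
        # arguments from the third-party weather service.
        meteo_data = fetch_meteo_data(
            latitude=input_data.arguments.latitude,
            longitude=input_data.arguments.longitude,
        )
        # Compute the power forecast with pvlib, using the (possibly
        # fitted) geometry and peak power parameters.
        predicted_power = predict_pv_power(
            meteo_data=meteo_data,
            azimuth=input_data.parameters.azimuth,
            inclination=input_data.parameters.inclination,
            peak_power=input_data.parameters.peak_power,
        )
        return {"power_prediction": predicted_power}

    def fit_parameters(input_data):
        # Fetch historic weather data matching the power measurements.
        meteo_data = fetch_meteo_data(
            latitude=input_data.arguments.latitude,
            longitude=input_data.arguments.longitude,
        )
        # Estimate azimuth, inclination, and peak power by minimizing the
        # squared error between predicted and measured power.
        parameters = fit_with_least_squares(
            meteo_data=meteo_data,
            measured_power=input_data.observations.measured_power,
        )
        return {"parameters": parameters}
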
The full version of this code can be found in the file `fooc.py <https://github.com/fzi-forschungszentrum-informatik/energy-service-generics/blob/main/docs/examples/basic_example/source/service/fooc.py>`__.

.. _data_model:

2. Definition of the data model
-------------------------------

The data model is the second component that is service specific and which must thus be defined by the service developer. The data models define the format of the data the client exchanges with the service. For a service without fittable parameters, i.e. a service with ``/request/`` endpoints only, it is sufficient to define the arguments required for computing the request as well as the result of the computation. The corresponding data models are called ``RequestArguments`` and ``RequestOutput``.

In the case of a service with fittable parameters, it is additionally necessary to define the data format for the input and output data of the ``/fit-parameters/`` endpoints. The data models specifying the input for the fitting process are referred to as ``FitParameterArguments`` and ``Observations``, and the corresponding output is ``FittedParameters``. As the simple PV power generation forecast service used as an example is designed to provide functionality to fit parameters, it is necessary to define all five data models.
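
As the original listing is omitted here, the following sketches what the five data models might look like using pydantic, on which FastAPI builds; all field names are assumptions, while ``GeographicPosition`` is an actual building block of the ESG package. The authoritative version is linked below.

.. code-block:: python

    from pydantic import BaseModel

    # Ready-to-use building block shipped with the ESG package.
    from esg.models.metadata import GeographicPosition

    class RequestArguments(BaseModel):
        # The position of the PV system for which power is forecast.
        geographic_position: GeographicPosition

    class FittedParameters(BaseModel):
        # Geometry and size of the PV system, estimated from measurements.
        azimuth_angle: float
        inclination_angle: float
        peak_power: float

    class RequestOutput(BaseModel):
        # Forecast power values indexed by time (simplified representation).
        power_prediction: dict[str, float]

    class FitParameterArguments(RequestArguments):
        # Fitting requires the same arguments as a normal request.
        pass

    class Observations(BaseModel):
        # Measured power production used for estimating the parameters.
        measured_power: dict[str, float]
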
The full version of this code can be found in the `data_model.py <https://github.com/fzi-forschungszentrum-informatik/energy-service-generics/blob/main/docs/examples/basic_example/source/service/data_model.py>`__ file.

The ESG package provides ready-to-use **building blocks for data models**, which can be found in the file `metadata.py <https://github.com/fzi-forschungszentrum-informatik/energy-service-generics/blob/main/source/esg/models/metadata.py>`__. For example, in the code above, ``GeographicPosition`` is imported from ESG. ``GeographicPosition`` is a data model itself, which defines that a geographic position consists of latitude and longitude. The data models also serve a documentational purpose, as they define the data structure of the downstream application, i.e. the EMS. Additionally, they define permitted ranges for values, which helps in automatically validating the input provided by clients.

3. Implementation of the worker component
-----------------------------------------

The worker component is responsible for executing the tasks, i.e. computing requests or fitting parameters by invoking the forecasting or optimization code, as well as for task scheduling. While the ESG framework utilizes the celery library for implementing the worker, it extends it with functionality to make the implementation of services more convenient, for example by utilizing the data models for de-/serialization of input and output data. Thus, the main objective when implementing a worker is to wire up the data models with the forecasting or optimization code. This is usually a rather simple program, as illustrated by the sketch below.
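
The following is a hypothetical reconstruction of such wiring code; the import path and task-factory names of the ESG package are assumptions for illustration. The authoritative version is linked below.

.. code-block:: python

    from esg.service.worker import make_celery_app, make_task  # hypothetical

    from data_model import (
        RequestArguments,
        RequestOutput,
        FitParameterArguments,
        FittedParameters,
    )
    from fooc import handle_request, fit_parameters

    # The celery app connects this worker to the message broker.
    app = make_celery_app()

    # Register the payload functions as celery tasks; the data models are
    # used to validate and de-/serialize the task input and output.
    request_task = make_task(
        app=app,
        payload_function=handle_request,
        InputModel=RequestArguments,
        OutputModel=RequestOutput,
    )
    fit_parameters_task = make_task(
        app=app,
        payload_function=fit_parameters,
        InputModel=FitParameterArguments,
        OutputModel=FittedParameters,
    )
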
The full version of this code can be found in the `worker.py <https://github.com/fzi-forschungszentrum-informatik/energy-service-generics/blob/main/docs/examples/basic_example/source/service/worker.py>`__ file.

4. Customization of the API component
-------------------------------------

The implementation of the API component is available ready-to-use in the ESG framework. However, in order to operate the API it is necessary, similar to the worker, to wire up the API with the other components, in particular with the data model and the worker. Furthermore, some information like the name and version number of the service must be provided too. Nevertheless, the necessary code to instantiate an API component is fairly simple, as illustrated by the sketch below.
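
The sketch below illustrates such an instantiation; the import path and constructor arguments of the API class are assumptions, and the authoritative version is linked below.

.. code-block:: python

    from esg.service.api import API  # hypothetical import path

    from data_model import RequestArguments, RequestOutput
    from worker import request_task, fit_parameters_task

    # Wire up the data models and the celery tasks of the worker and
    # provide the metadata of the service (all values are assumptions).
    api = API(
        title="PV Power Generation Forecast Service",
        version="0.0.1",
        RequestArguments=RequestArguments,
        RequestOutput=RequestOutput,
        request_task=request_task,
        fit_parameters_task=fit_parameters_task,
    )
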
The full version of this code can be found in the `api.py <https://github.com/fzi-forschungszentrum-informatik/energy-service-generics/blob/main/docs/examples/basic_example/source/service/api.py>`__ file.

5. Building docker images to derive functional services
--------------------------------------------------------

In order for the service developer to derive functional services, they need to build docker images that can be run e.g. on Kubernetes or Docker Swarm. It is necessary to build two distinct images, one for the API (which includes the data model) and one for the worker (which includes the data model and the forecasting or optimization code). The build instructions for both images are implemented as a `Dockerfile <https://docs.docker.com/reference/dockerfile/>`__ each, as sketched below.
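
As the original listings are not reproduced here, the following is a hypothetical sketch of the API image; the base image, package name, file paths, and entry command are assumptions. The worker image is built analogously but additionally copies the forecasting code, i.e. ``fooc.py``. The authoritative build instructions are linked below.

.. code-block:: dockerfile

    # Hypothetical sketch of Dockerfile-API; all values are assumptions.
    FROM python:3.11-slim

    # Install the ESG framework package (package name is an assumption).
    RUN pip install energy-service-generics

    # The API container needs the data model and the API wiring code only.
    COPY service/data_model.py /service/data_model.py
    COPY service/api.py /service/api.py

    # Start the API process (entry command is an assumption).
    CMD ["python", "/service/api.py"]
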
The full build instructions can be found in the `Dockerfile-API <https://github.com/fzi-forschungszentrum-informatik/energy-service-generics/blob/main/docs/examples/basic_example/source/Dockerfile-API>`__ and the `Dockerfile-worker <https://github.com/fzi-forschungszentrum-informatik/energy-service-generics/blob/main/docs/examples/basic_example/source/Dockerfile-worker>`__ files.