Mistral HTTP listeners

Registered by Renat Akhmerov

The Mistral API should make it possible to register event listeners so that a client application can be notified about events happening in a workflow execution and in particular tasks.

When registering a listener, the following information should be specified:
* Workbook name
* Event types - a list with possible values: "EXECUTION", "TASK"
* URL - a webhook URL to be called when an event occurs

Notes from https://etherpad.openstack.org/p/mistral-DZ-RA:
* Webhook event callbacks - do not use a task; call from the Engine (from on_task_result, outside the transaction scope)
In the implementation, introduce a type "callback" (so that it can be extended to other transports, e.g. AMQP)

From the REST API perspective it would simply be:
    callback = {
        events: [on-task-complete, on-execution-complete],
        url: http://bla.com,
        method: POST,
        headers: {},
        ... other stuff to form a proper HTTP call, like API tokens, etc ...
    }
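Engine-side, dispatching such a registered callback (from on_task_result, outside the transaction scope, per the notes above) could look roughly like this. A hedged sketch: the function names and payload shape are assumptions, not the Mistral implementation:

```python
# Hypothetical sketch of engine-side webhook dispatch for a registered
# callback dict; build_callback_request/notify are illustrative names.
import json
import urllib.request


def build_callback_request(callback, event, payload):
    """Build the HTTP request described by a registered callback dict."""
    body = json.dumps({"event": event, **payload}).encode("utf-8")
    headers = {"Content-Type": "application/json", **callback.get("headers", {})}
    return urllib.request.Request(
        callback["url"],
        data=body,
        headers=headers,
        method=callback.get("method", "POST"),
    )


def notify(callback, event, payload):
    """Fire the webhook if the event is one the listener subscribed to."""
    if event not in callback["events"]:
        return
    req = build_callback_request(callback, event, payload)
    # A real engine would add retries/timeouts and swallow errors so that
    # a failing listener cannot break workflow execution.
    with urllib.request.urlopen(req, timeout=5) as resp:
        resp.read()
```

Separating request construction from sending keeps the transport testable and makes it easy to bolt on retry or auth-token logic later.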

Blueprint information

Status: Not started
Approver: None
Priority: Medium
Drafter: Renat Akhmerov
Direction: Approved
Assignee: None
Definition: Approved
Series goal: None
Implementation: Deferred
Milestone target: None

Whiteboard

[enykeev 04.03.2014] The purpose of listeners is to inform a remote server (using HTTP requests or AMQP) about the events that happen during workbook execution.

The first thing that comes to mind: is a listener different enough from a service action to justify creating an additional entity? Its purpose is to send HTTP/AMQP requests to a remote server; the only difference is that this needs to be done either before or after task execution. In general, I see two ways we can deal with that.

1) A listener is just the same as a Service, and all we need to do is add on-enter and on-exit callbacks to a task.
Our DSL would then look like this:

  Service:
    Nova:
      ...
    Logger:
      type: HTTP
      parameters:
        baseUrl: {$.LoggerURL}
        actions:
          start:
            parameters:
              url: /log
              method: GET
              query:
                task: {$.execution.task}
                execution: {$.execution.id}
                type: 'enter'
          success:
            parameters:
              url: /log
              method: GET
              query:
                task: {$.execution.task}
                execution: {$.execution.id}
                type: 'done'
          error:
            parameters:
              url: /log
              method: GET
              query:
                task: {$.execution.task}
                execution: {$.execution.id}
                type: 'failed'
                error: {$.execution.error}

This way you could define request logic for every remote API you may have. A workbook might then look like this:

  Workbook:
    tasks:
      startLog:
        action: Logger:start
      successLog:
        action: Logger:success
      errorLog:
        action: Logger:error
      task1:
        on-enter: startLog
        action: Nova:some
        on-success:
          - task2
          - successLog
        on-error:
          - task3
          - errorLog

But I would prefer to get rid of the additional tasks by pointing on-enter and on-exit directly at actions:

  Workbook:
    tasks:
      task1:
        on-enter: Logger:start
        action: Nova:some
        on-exit:
         - 'Logger:success' : $.execution.error = null
         - 'Logger:error' : $.execution.error != null
        on-success: task2
        on-error: task3

Or we could even say that tasks should consist only of actions and rewrite it as something like:

  Workbook:
    tasks:
      task1:
        on-enter: Logger:start
        action: Nova:some
        on-exit:
         - 'std:goto(task2)' : $.execution.error = null
         - 'Logger:success' : $.execution.error = null
         - 'std:goto(task3)' : $.execution.error != null
         - 'Logger:error' : $.execution.error != null

We can keep on-success and on-error as syntactic sugar, or decide we want our DSL to be as simple as possible and implement this logic at the model level, though that is probably a question for another topic.
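The conditional on-exit clauses in the examples above could be evaluated along these lines. A minimal sketch only: real conditions like `$.execution.error = null` would be evaluated by an expression engine, so plain Python predicates stand in for them here, and `select_on_exit_actions` is a hypothetical name:

```python
# Minimal sketch of evaluating an on-exit clause list. Each clause pairs
# an action name with a condition; in the real DSL the condition would be
# an expression string, represented here by a Python predicate over the
# execution context.
def select_on_exit_actions(clauses, context):
    """Return the actions whose condition holds for this context."""
    return [action for action, condition in clauses if condition(context)]


# The two clauses from the example above, as (action, predicate) pairs.
clauses = [
    ("Logger:success", lambda ctx: ctx["execution"]["error"] is None),
    ("Logger:error", lambda ctx: ctx["execution"]["error"] is not None),
]
```

With `$.execution.error` present in the context, the engine would run `Logger:success` on a clean exit and `Logger:error` otherwise.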

Back to the listeners: what we would need for this option is to implement on-enter and on-exit transitions, plus a few additions to our execution context - first of all $.execution.error, which would hold information about the latest error within the current execution. Apart from needing it to send to the remote server, you could also use it to handle the error later:

  Workbook:
    tasks:
      task1:
        action: Nova:some # here comes the error
        on-exit: task2
      task2:
        action: Nova:else
        on-exit: handleError
      handleError:
        action: std:no-op
        on-exit:
          - 'Logger:success' : $.execution.error = null
          - 'Logger:error' : $.execution.error != null

2) We don't want to pollute the DSL with things not directly related to the flow itself, and we just want to assign some common-type listeners using the API.

Another way to solve this is, I suppose, the one Renat had in mind. In terms of python-mistralclient's CLI, it would look something like this:

  $ mistral listener-add {task|execution} id url

Then every time the task (or every task of the execution) with the specified id runs, the executor should send an HTTP request to the URL with all the params it might need. For the URL "http://localhost:9000/log" it would be something like this:

  http://localhost:9000/log?execution_id=...&task_id=...&event=on-start&...
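Forming that notification URL on the executor side is straightforward; this sketch assumes the parameter names shown above, and `notification_url` is an illustrative name:

```python
# Hypothetical sketch of how the executor could form the notification
# URL for a registered listener from the event parameters.
from urllib.parse import urlencode


def notification_url(base_url, execution_id, task_id, event):
    """Append execution/task/event identifiers to the listener URL."""
    query = urlencode({
        "execution_id": execution_id,
        "task_id": task_id,
        "event": event,
    })
    return "%s?%s" % (base_url, query)
```

Using urlencode keeps identifiers with special characters safe in the query string.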

Though the second option seems convenient in some cases (where you have full control over the remote endpoint or want a simple solution for debugging), I'd say the possibilities here are quite limited, and for every additional feature we would have to repeat work we already did for services. At the same time, the first one is prone to blowing the DSL up with insignificant things: dead-end utility tasks that do not affect the flow.

There is actually a way to take the best of both worlds by exposing API methods for manipulating services and tasks (create a new service for an existing workbook, add a few actions to it, add an on-exit for an existing task pointing at the service action we just created) without adding them to the DSL, but the questions here are:
 - Why wasn't it done in the first place? I expect workbook-upload-definition to be used to define a workbook in a simple, convenient way, but not to be the only way to do it. I think the user should be able to manually define every piece of a workbook.
 - Though for now we are planning to use on-enter only for notification, it might also perform some checks, thus being a kind of replacement for require. Should we wait for on-enter to finish successfully (and then handle its error as a task error) before executing the action, or should we just 'cast' it and call it a day?
 - Should we desynchronize the DSL from the model in this way? Will there be a reason to keep workflow-get-definition? What should we do with manually defined services and tasks when a new DSL is uploaded, and should we allow uploading DSL for an existing workbook?
 - Will it be convenient for the user to make multiple API calls instead of one? Should we create some kind of sugary listener-add that bundles multiple API calls with some default values included?
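The "sugary listener-add" idea could be sketched client-side as one helper that composes the individual API calls (create a service, add an action, attach an on-exit pointer). Everything here is hypothetical: the paths, payload shapes, and naming scheme are assumptions, not the actual Mistral API:

```python
# Hedged sketch of what a "sugary" listener-add could do client-side:
# compose the three individual (hypothetical) API calls into one helper.
# Returning the call list instead of sending it keeps the sketch testable.
def listener_add_calls(workbook, task, url):
    """Return the sequence of (method, path, body) calls the helper would make."""
    service = "%s_listener" % task  # hypothetical naming scheme
    return [
        ("POST", "/workbooks/%s/services" % workbook,
         {"name": service, "type": "HTTP", "parameters": {"baseUrl": url}}),
        ("POST", "/workbooks/%s/services/%s/actions" % (workbook, service),
         {"name": "notify", "parameters": {"url": "/", "method": "POST"}}),
        ("PUT", "/workbooks/%s/tasks/%s" % (workbook, task),
         {"on-exit": ["%s:notify" % service]}),
    ]
```

A real helper would execute these calls in order and roll back on failure; bundling them answers the "multiple API calls instead of one" concern while keeping the underlying API fine-grained.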

