Event Hub failover implementation

EventProcessorHost provides persistent checkpoint storage in a blob for failover, but that is only a convenience for consumer failover, not failover of the Event Hub itself.
Event Hubs has no equivalent of the Service Bus paired-namespaces concept for queue redundancy (failover). How would one implement Event Hub redundancy easily?
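One approach that is often suggested (sketched below; not an out-of-the-box feature) is to provision a second event hub in another namespace/region and have senders fail over between the two, while consumers read from both and de-duplicate. The connection strings, hub name and the FailoverSender type are illustrative only:

    // Sketch only: client-side failover between two event hubs in different namespaces.
    // Connection strings and the hub name are placeholders, not Azure-provided settings.
    using System;
    using System.Threading.Tasks;
    using Microsoft.ServiceBus.Messaging;

    public class FailoverSender
    {
        private readonly EventHubClient _primary;
        private readonly EventHubClient _secondary;

        public FailoverSender(string primaryConnectionString, string secondaryConnectionString, string hubName)
        {
            _primary = EventHubClient.CreateFromConnectionString(primaryConnectionString, hubName);
            _secondary = EventHubClient.CreateFromConnectionString(secondaryConnectionString, hubName);
        }

        public async Task SendAsync(byte[] payload)
        {
            try
            {
                await _primary.SendAsync(new EventData(payload));
            }
            catch (MessagingException)
            {
                // Primary namespace unreachable: fall back to the secondary hub.
                // Consumers must read from both hubs and de-duplicate.
                await _secondary.SendAsync(new EventData(payload));
            }
            catch (TimeoutException)
            {
                await _secondary.SendAsync(new EventData(payload));
            }
        }
    }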

Hi - thanks for your suggestion, but adding a "UserProperties" JSON array to the content of the message does not seem to work. When the message is received through IEventProcessor.ProcessEventsAsync(),
the Properties dictionary on the EventData instance is still empty.
Please let me clarify what I am trying to achieve here.
I have a plain HTTP client (a C++ client) sending messages to the Event Hub via HTTPS. I am trying to achieve the same thing I would do in a
C# client with the Service Bus client library for Event Hubs:
SomeEventBody body = new SomeEventBody { SomeData = 100 };
EventData data = new EventData(body, serializer); // object and serializer
// *** I WANT TO SET PROPERTIES ON THE EVENTDATA LIKE THIS ***
data.Properties.Add("Type", "Telemetry_" + DateTime.Now.ToLongTimeString());
await client.SendAsync(data); // send a single message asynchronously
When this message is received by the event processor, I can access EventData.Properties and read the "Type" entry from the dictionary.
I want to be able to set the same "Type" property when I send the message from a plain HTTP client, and read the value out of the dictionary in the same way when the event processor receives it. I can't, though,
because EventData.Properties is always just an empty collection.
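For reference, a raw HTTPS send of this kind looks roughly like the sketch below (the older Send Event REST endpoint). The custom "Type" header is only an assumption borrowed from the Service Bus REST convention of passing custom properties as HTTP headers; as described above, it is not showing up in EventData.Properties on the receiving side, which is exactly the open question.

    // Sketch of a plain HTTPS send to an event hub without any client library.
    // Namespace, hub name and SAS token are placeholders; the "Type" header is an
    // assumption, not a documented way to populate EventData.Properties.
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    public static class HttpSender
    {
        public static async Task SendAsync(string ns, string hub, string sasToken, string json)
        {
            using (var http = new HttpClient())
            {
                var url = "https://" + ns + ".servicebus.windows.net/" + hub + "/messages";
                var request = new HttpRequestMessage(HttpMethod.Post, url);
                request.Content = new StringContent(json, Encoding.UTF8);
                request.Content.Headers.ContentType =
                    MediaTypeHeaderValue.Parse("application/atom+xml;type=entry;charset=utf-8");
                // SAS token for a send-only rule goes in the Authorization header.
                request.Headers.TryAddWithoutValidation("Authorization", sasToken);
                // Assumed custom-property header (the behaviour under discussion).
                request.Headers.TryAddWithoutValidation("Type", "Telemetry_" + DateTime.Now.ToLongTimeString());

                var response = await http.SendAsync(request);
                response.EnsureSuccessStatusCode();
            }
        }
    }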

Similar Messages

  • Most simple query on Event Hub stream (json) constantly gives Data Conversion Errors

    Hello all,
    I had been playing with ASA in December and didn't have any issues; my queries kept working and output the data as needed. However, since January, in a new demo I created, I constantly get data conversion errors. The scenario is described
    below, but I have the following questions:
    Where can I get detailed information on the data conversion errors? At the moment I can't find anything (not in the operation logs and not in the table storage of my diagnostic storage account).
    What could be wrong in my scenario and could be causing these issues?
    The scenario I have implemented is the following:
    My local devices send EventData objects, serialized through Json.Net to an Event Hub with 32 partitions.
    I define my query input as Event Hub Stream and define the data as json/utf8.  I give it the name TelemetryReadings
    Then I write my query as SELECT * FROM TelemetryReadings
    In the output, I create an output on blob with CSV/UTF8 encoding
    After that, I start the job
    The result is an empty blob container (no output written) and tons of data conversion errors in the monitoring graph.  What should I do to get this solved?
    Thanks
    Sam Vanhoutte - CTO Codit - VTS-P BizTalk - Windows Azure Integration: www.integrationcloud.eu

    So, apparently the issue was related to the incoming objects I had: I was sending unsupported data types (boolean and Dictionary). I changed my code to remove these from the JSON and that worked out well (see the sketch at the end of this post). A change had been deployed
    so that, instead of marking the unsupported fields as null, they were throwing an exception. That's why things worked earlier.
    So, it had to do with the limitation that I mentioned in my earlier comment:
    https://github.com/Azure/azure-content/blob/master/articles/stream-analytics-limitations.md
    "Unsupported type conversions result in NULL values.
    Any event values with type conversions not supported in the Data Types section of the Azure Stream Analytics Query Language Reference will result in a NULL value. In this preview release no error logging is in place for these conversion exceptions."
    I am creating a blog post on this one.
    Sam Vanhoutte - CTO Codit - VTS-P BizTalk - Windows Azure Integration: www.integrationcloud.eu
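    As a concrete illustration of the fix described above, the sketch below keeps the payload to simple scalar fields before serializing with Json.Net, leaving out the boolean and Dictionary members that triggered the conversion errors. The type and field names are made up:

        // Illustrative only: keep the event to types ASA could convert at the time
        // (numbers, strings, datetimes); no bool or Dictionary members.
        using System;
        using System.Text;
        using Newtonsoft.Json;

        public class TelemetryReading            // hypothetical device payload
        {
            public string DeviceId { get; set; }
            public double Temperature { get; set; }
            public DateTime ReadingTime { get; set; }
        }

        public static class PayloadHelper
        {
            public static byte[] ToJsonBytes(TelemetryReading reading)
            {
                // Serialize only supported scalar fields and hand the bytes to EventData.
                string json = JsonConvert.SerializeObject(reading);
                return Encoding.UTF8.GetBytes(json);
            }
        }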

  • How to use peeklock/delete pattern to handle messages in event hub?

    Hi,
    I have a consumer group with multiple consumers. If one consumer fails to process a message (for some internal reason, not a poison message), I want it to just leave that message and let other consumers have a chance to process it again. Previously I used a Service Bus queue/topic,
    where the peek-lock/delete pattern with the dead-letter queue achieves this easily. Now, for other reasons, I have switched to Event Hub.
    Is there still some way to achieve this? What is the recommended way to handle this in Event Hub?
    Thanks!
    Robin

    There's no lock-and-delete concept in Event Hubs. Think of an Event Hub as a stream of events that you can rewind and start reading from wherever you want. For example, in your case, if one of the clients fails to process an event, any other client
    can start reading from the position where the first client failed. Let me know if you have additional questions.
    I suggest you read about EventProcessorHost for Event Hubs. This is probably what you're looking for (a minimal sketch follows the link below).
    http://azure.microsoft.com/en-us/documentation/articles/service-bus-event-hubs-csharp-ephcs-getstarted/
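    For reference, the kind of processor that the getting-started article walks through looks roughly like the sketch below (older Microsoft.ServiceBus SDK; the class name is illustrative). Checkpointing after a batch is what lets another consumer resume from that offset instead of re-reading the whole stream:

        // Minimal IEventProcessor sketch. Not checkpointing a failed batch means those
        // events are re-read when the partition is picked up again.
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Threading.Tasks;
        using Microsoft.ServiceBus.Messaging;

        public class SimpleEventProcessor : IEventProcessor
        {
            public Task OpenAsync(PartitionContext context)
            {
                Trace.WriteLine("Partition " + context.Lease.PartitionId + " opened");
                return Task.FromResult<object>(null);
            }

            public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
            {
                foreach (var eventData in messages)
                {
                    // Handle the event here.
                }
                await context.CheckpointAsync();   // record progress so another host can resume here
            }

            public async Task CloseAsync(PartitionContext context, CloseReason reason)
            {
                if (reason == CloseReason.Shutdown)
                {
                    await context.CheckpointAsync();
                }
            }
        }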

  • Event Hubs EventProcessingHost and partitioning over worker roles

    Hi,
    I have a question after reading the programming guide here:
    https://msdn.microsoft.com/en-us/library/azure/dn789972.aspx
    My understanding is that if I'm using multiple worker roles on the same event hub, EventProcessorHost guarantees that a single machine is processing a single partition at any given time. If a checkpoint is made, then the partition might be moved to be
    processed on another machine, but in between checkpoints, all the work for a particular partition is done on the same machine.
    Is my understanding correct?
    Thanks,
    Andy 

    Hi Andy,
    Your understanding is partially correct. EPH (EventProcessorHost) provides load balancing while consuming events from multiple partitions of an event hub. Let me highlight a few points that should address your questions:
    * A single host (you called this a machine; actually you can run multiple hosts in a single process, i.e. on a single machine) can process multiple partitions.
    * Having a partition processed by only one host at any given time is best effort. There can be situations where two hosts end up processing the same event of the same partition while partitions move from one host to another. Remember that Event Hubs
    doesn't provide a lock-and-delete model; it behaves more like a stream.
    * Also remember that checkpointing is mainly for failover, i.e. scenarios where one host crashes, restarts or stops functioning as expected, so that a new host can pick up that host's partition(s) and continue from wherever it left off (a registration sketch follows below).
    Let me know if you have any questions.
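    To make the host/partition distinction concrete, registering a host looks roughly like the sketch below (older Microsoft.ServiceBus SDK; the hub name and elided connection strings are placeholders, and SimpleEventProcessor stands for your IEventProcessor implementation). Every registered host competes for blob leases, and the lease manager spreads the partitions across all of them:

        // One EventProcessorHost instance; run several (on one machine or many) and the
        // partitions of the event hub are balanced across them via blob leases.
        using System;
        using Microsoft.ServiceBus.Messaging;

        class HostProgram
        {
            static void Main()
            {
                string hostName = Environment.MachineName + "-" + Guid.NewGuid();   // unique per host
                var host = new EventProcessorHost(
                    hostName,
                    "myeventhub",                            // event hub path (placeholder)
                    EventHubConsumerGroup.DefaultGroupName,
                    "Endpoint=sb://...",                     // event hub connection string (elided)
                    "DefaultEndpointsProtocol=https;...");   // storage account for leases/checkpoints (elided)

                host.RegisterEventProcessorAsync<SimpleEventProcessor>().Wait();

                Console.WriteLine("Receiving. Press enter to stop.");
                Console.ReadLine();
                host.UnregisterEventProcessorAsync().Wait();
            }
        }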

  • JMS Failover Implementation With Cluster Consist Of Four Servers

    Hi All,
    I mistakenly posted the following thread in the WebLogic general area. It should be here. Can anyone help, please?
    [Document for JMS Failover Implementation On WebLogic|http://forums.oracle.com/forums/thread.jspa?threadID=900391&tstart=15]
    Could you please just reply here.
    Thanks :)


  • Event Hub "publisher" endpoint and token : is it only valid for HTTP REST APIs ?

    Hello,
    As explained here
    http://msdn.microsoft.com/en-us/library/azure/dn789974.aspx I can create a SAS rule to allow devices to send only to an event hub.
    Now I can use this rule and the related connection string to create an EventHubClient object that is only able to send to the event hub. In this case I don't have fine-grained control over devices: all devices created by the Azure SDK APIs with this
    SAS rule can only send, and if I want to block some devices, I can't; removing the rule would block all devices.
    For fine-grained control there is the publisher concept (an endpoint on the event hub).
    Starting from the same send-only SAS rule, I can generate more tokens and give one to each device. In this case, without removing the SAS rule, I can block a single device by revoking its token.
    Now, reading the Event Hub documentation, I can't find any API to send data using the publisher concept (and using a token).
    See also the AMQP.Net Lite documentation (http://amqpnetlite.codeplex.com/wikipage?title=Using%20Amqp.Net%20Lite%20with%20Azure%20Server%20Bus%20Event%20Hub&referringTitle=Documentation):
    I can send with the publisher concept but without specifying a token, only a SAS rule (key name and shared key).
    My understanding is that the token is useful only with the HTTP REST APIs (putting the token in the Authorization header). Is that right?
    Thanks,
    Paolo
    Paolo Patierno

    Hi Paolo,
    Yes, the SAS token is for HTTP POST. That method needs the token to secure the channel. This is different from AMQP 1.0 (see the sketch below).
    Regards,
    Will
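    To make that concrete: a per-publisher token is just a SAS computed over the publisher-specific URI (https://{namespace}.servicebus.windows.net/{hub}/publishers/{deviceId}/messages), and it is then passed in the Authorization header of an HTTPS POST to that same URI. A sketch follows; the key name, key and one-hour lifetime are placeholders:

        // Sketch: build a SAS token scoped to one publisher endpoint.
        using System;
        using System.Globalization;
        using System.Net;
        using System.Security.Cryptography;
        using System.Text;

        public static class PublisherSas
        {
            public static string CreateToken(string resourceUri, string keyName, string key, TimeSpan ttl)
            {
                string encodedUri = WebUtility.UrlEncode(resourceUri);
                long expiry = (long)(DateTime.UtcNow.Add(ttl) - new DateTime(1970, 1, 1)).TotalSeconds;
                string stringToSign = encodedUri + "\n" + expiry;

                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
                {
                    string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
                    return string.Format(CultureInfo.InvariantCulture,
                        "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
                        encodedUri, WebUtility.UrlEncode(signature), expiry, keyName);
                }
            }
        }

        // Usage (each device gets a token for its own publisher):
        //   var uri = "https://mynamespace.servicebus.windows.net/myhub/publishers/device42/messages";
        //   var token = PublisherSas.CreateToken(uri, "SendRule", "<shared key>", TimeSpan.FromHours(1));
        //   // POST the event body to uri with the header  Authorization: <token>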

  • Sending data to an Azure Event Hub from an iOS device

    I have been unable to find a library that will allow an iOS application to communicate with an Azure Event Hub.  Preferably, the library should use the AMQP 1.0 protocol -- since Microsoft recommends it when sending large amounts of data.
    Can someone point me to a library?  
    Thanks

    There's no out-of-the-box solution for Event Hubs on iOS for now. Please consider using REST for sending events.

  • Can we use a scheduler to automatically adjust event hub throughput units?

    Hi there,
    I want to use Event Hub in our solution. We have millions of event publishers and consumers. The problem is that we have a peak time from 8:00 to 17:00; at night we may only have a few messages. Event Hub bills
    against throughput units, so I want to buy more units before 8:00 and reduce the amount in the evening. Instead of using the Azure portal, I want to set up a scheduler to do this for me automatically. I didn't find anything on the web that tells me how to achieve
    this goal.
    Please help and advise.
    Thanks,
    Frederick

    No. AQ is not supported by Oracle Streams. User-defined and Sys.AnyData are not supported types.
    You can create an AQ propagation process from the source to the backup site, but you will need to dequeue at both sites simultaneously.
    Or you can create a shadow table (

  • Event Hubs with IoT device and HTTPS

    I am currently working with setting up an Azure IoT demo showcasing the power of Event Hubs and a few other technologies. The devices I am working with use very basic WiFi chips that run off AT commands. The Wifi chip and supporting microcontrollers do
    not support SSL. Is there any way to upload events to event hub without SSL?

    Azure Service Bus doesn't allow unsecured traffic. One solution might be to build your own bridge/proxy, such as a web service that your devices talk to and that pushes the events on to Azure Event Hubs, as sketched below.
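    A minimal shape of such a bridge (purely illustrative; a real one would add authentication, batching and throttling) is an HTTP listener that accepts plain-HTTP posts from the constrained devices and forwards each payload to the event hub over the SDK's secured connection. The listener prefix, hub name and connection string below are placeholders:

        // Sketch of a plain-HTTP -> Event Hubs bridge.
        using System;
        using System.IO;
        using System.Net;
        using System.Text;
        using Microsoft.ServiceBus.Messaging;

        class Bridge
        {
            static void Main()
            {
                var client = EventHubClient.CreateFromConnectionString("Endpoint=sb://...", "myeventhub");
                var listener = new HttpListener();
                listener.Prefixes.Add("http://+:8080/events/");   // plain HTTP for the devices
                listener.Start();
                Console.WriteLine("Bridge listening...");

                while (true)
                {
                    var ctx = listener.GetContext();
                    string body;
                    using (var reader = new StreamReader(ctx.Request.InputStream, Encoding.UTF8))
                    {
                        body = reader.ReadToEnd();
                    }
                    // Forward to the event hub over TLS (handled by the SDK).
                    client.Send(new EventData(Encoding.UTF8.GetBytes(body)));
                    ctx.Response.StatusCode = 202;   // accepted
                    ctx.Response.Close();
                }
            }
        }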

  • Event Hubs sporadic exceptions

    Hello all,
    I hope someone can give me a hint on where to look for solutions for these errors we're encountering sporadically when sending messages to Event Hubs from a series of web roles. This is a setup with 32 partitions which is getting about 700 messages per second.
    If you have any ideas about what these are or what can be done to avoid losing any messages I'd be grateful to read your answer.
    From what I've read in the documentation I'm assuming internally the SDK is using the default retry configuration and these exceptions should be happening only after the retries have failed.
    These are the exceptions:
    System.TimeoutException: The operation did not complete within the allocated time 00:01:09.9573007 for object requestresponseamqplinkXXXXX.
    Microsoft.ServiceBus.Messaging.MessagingException: The AMQP object sessionXXXXX is aborted.
    The XXXXX are numbers I've removed because I don't know what they mean or whether it would be insecure to share them publicly.
    The code failing eventually is calling the EventHubClient.SendAsync(EventData data) method.
    Thank you very much!
    Ezequiel

    I'm currently using MessagingFactory.CreateFromConnectionString(connectionString) to create the MessagingFactory object and don't see an overload to set up the MessagingFactorySettings. Could you provide me with an example of how I can replace that to set the
    OperationTimeout? (See the sketch after this post.)
    So let me know if I understood correctly: even if there was a transient error, if any of the retries succeeds before the timeout, no exception will be thrown, correct?
    If that's the case, this answer from SO, which quotes you, makes the subject confusing:
    http://stackoverflow.com/questions/26875625/azure-service-bus-transient-errors-exceptions-received-through-the-message-pu
    We're using a remote DC because our web roles were already in North Central and Event Hubs is not available there yet, so we had to pick South Central for now.
    Thank you very much,
    Ezequiel
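    Regarding the overload question above, one way to set OperationTimeout with the older Microsoft.ServiceBus SDK is to build the factory from a URI plus MessagingFactorySettings instead of the raw connection string. This is only a sketch; the namespace, key name/key, hub name and the two-minute timeout are placeholders, and it assumes your SDK version exposes OperationTimeout on MessagingFactorySettings:

        // Create the MessagingFactory with explicit settings so OperationTimeout can be raised.
        using System;
        using Microsoft.ServiceBus;
        using Microsoft.ServiceBus.Messaging;

        class FactoryExample
        {
            static EventHubClient CreateClient()
            {
                var settings = new MessagingFactorySettings
                {
                    TransportType = TransportType.Amqp,
                    OperationTimeout = TimeSpan.FromMinutes(2),   // instead of the default
                    TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("SendRule", "<shared key>")
                };

                var factory = MessagingFactory.Create(
                    ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", string.Empty),
                    settings);

                return factory.CreateEventHubClient("myeventhub");
            }
        }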

  • Events HUB and Python

    Hello,
    Does Event Hubs work with Python?
    I only saw examples for .NET, Java and C in the documentation.

    Hi,
    The Microsoft Azure Python SDK provides a set of Python packages for easy access to Azure storage services, Service Bus queues, topics and the service management APIs. Unfortunately, there is no support for Event Hubs at this stage yet. Luckily, Microsoft
    is embracing the open source community these days and is hosting the Python SDK on GitHub for public contribution, so hopefully this will be added soon.
    Best Regards,
    Jambor

  • Accessing the Sender parameter attribute in an event handler method implementation

    Hello knowledgeable friends.
    I would like a single event handler to manage two ALV grid objects. There is a parameter SENDER available in the method implementation that indicates which object triggered the event. I would like to get at the attribute MT_OUTTAB, but this syntax does not work:
    local_variable = sender->mt_outtab.
    Any help would be greatly appreciated.

    OK, MT_OUTTAB is a protected attribute. I would settle for just the name of the sender. This code passes the syntax check:
        call method sender->get_name RECEIVING name = l_name.
    but l_name is empty. I was hoping for 'GRID1'; when I created the object I used:
        CREATE OBJECT alvgrid
          EXPORTING
            i_parent = container_top
            i_name = 'GRID1'.

  • Receiving events from event hub was blocked.

    In our Cloud Service project we have two instances of a worker role (deployed to Azure); the worker role consumes events from the Event Hub using EventProcessorHost (the host name is the role instance name).
    For sending events:
        var client = EventHubClient.CreateFromConnectionString(serviceBusConnectionString, hubName);
        while (true)
        {
            var eventData = new EventData(Encoding.UTF8.GetBytes("test")) { PartitionKey = "key" };
            eventData.Properties.Add("time", DateTime.UtcNow);
            client.SendAsync(eventData).Wait();
            Thread.Sleep(50);
        }
    Every 50 ms we send one event (event1, 2, 3 ...).
    For receiving data:
        public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> events)
        {
            // when we get the events we can see it in the log
            Trace.WriteLine("got events");
            foreach (var eventData in events)
            {
                // handle the event
                Task.Delay(12000).Wait();
            }
            await context.CheckpointAsync();
        }
    We added the delay to simulate the per-event processing.
    From the log it seems that we cannot receive data in time: event6 was blocked by the event5 delay, and only after the 12-second delay could we receive event6 from the Event Hub, so the event6
    delay was 40 s (from the log, we sent event6 to the hub at 35:10 but only got it from the hub at 35:50).
    So I want to know: what is the maximum number of threads working on processing for the EventProcessorHost? Does it depend on the number of partitions?
    And is there any way to receive events in time?

    Hi Jordan
    Since the Task.Delay(...).Wait() call blocks the callback, the host won't hand over new events until you're done with the current batch. This is due to the ordering guarantee on delivered events, i.e. the host must process events from the same partition in order.
    If event processing takes this long, you should consider moving the processing into a separate thread so the host can deliver a new batch of events while that thread is still working on the previous batch, as sketched below.
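    One way to apply that advice (a sketch, not the only pattern): hand each batch to a background worker through an in-memory queue so ProcessEventsAsync returns quickly. Note the trade-off made explicit in the comments: checkpointing before the queued work finishes means a crash can skip buffered events.

        // Decouple the slow per-event work from the EventProcessorHost callback so the
        // host can deliver the next batch while the worker is still busy.
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading.Tasks;
        using Microsoft.ServiceBus.Messaging;

        public class QueueingProcessor : IEventProcessor
        {
            private readonly BlockingCollection<EventData> _buffer = new BlockingCollection<EventData>();
            private Task _worker;

            public Task OpenAsync(PartitionContext context)
            {
                _worker = Task.Run(() =>
                {
                    foreach (var eventData in _buffer.GetConsumingEnumerable())
                    {
                        Task.Delay(12000).Wait();   // the slow per-event work from the question
                    }
                });
                return Task.FromResult<object>(null);
            }

            public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> events)
            {
                foreach (var eventData in events)
                {
                    _buffer.Add(eventData);         // hand off and return quickly
                }
                // Checkpointing here records events that may not be processed yet.
                await context.CheckpointAsync();
            }

            public async Task CloseAsync(PartitionContext context, CloseReason reason)
            {
                _buffer.CompleteAdding();
                if (_worker != null)
                {
                    await _worker;
                }
            }
        }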

  • Registration for an event from a CID-implementing component failed

    Hi,
    I have a problem with the following scenario:
    - I have a component Main, and component interface definitions (CIDs) L and MENU
    - components M1 and L1 implement the CIDs
    Now my problem:
    Main embeds the interface view of CID L (in my case the component L1 at runtime) and L1 embeds the interface view of CID MENU (M1 at runtime), both embedded in a view container, of course.
    In M1 I have a tree UI element, and when a leaf in the tree is clicked an event is fired in M1. This event is defined in the CID MENU, and Main has registered for it with an event handler method.
    When I start my application the event is fired in M1, but the Main component doesn't execute its event handler method for this event. What could be the reason?
    Does anyone have an idea?

    No messiah out there? :-D

  • Document for JMS Failover Implementation On WebLogic

    Hi,
    I am looking for some good links and techniques to implement JMS failover using WebLogic 10.3.
    Failover, as we do with our databases (the concept of clustering): the system will consist of two app servers, each with its own application deployments, but if one fails for some reason, the application messages should be redirected to the other server, and vice versa.
    The above definition is very brief, but if anyone can help by providing some good documents and info on how to implement it, it will be appreciated.
    Thanks :-)

    Thanks a lot, guys, for your help. We successfully implemented it on our servers here by creating distributed queues targeting all servers in the cluster.
    One point which I think is worth mentioning and want to share with everyone here: when the app server where the MDB will finally post the message (after retrieving it from the queue) goes down, what happens, and what will the MDB do with that message?
    We implemented the DLQ (error destination) and deployed one more MDB, MDB_DLQ_SERVER2 (say App SERVER1 is down), which is triggered when any message arrives in the DLQ and posts that message to some other app server. Say a message has been read by MDB_SERVER1 on SERVER1, but of course the actual server is down, so the message should get redirected to its error destination after its expiration period or whatever the settings are. The DLQ (error destination) is also a distributed destination, again targeting all servers in the cluster, the same as the actual request or reply queues, BUT MDB_DLQ_SERVER2, which is deployed on SERVER2, is NOT able to read this message. It gets triggered but cannot access the message.
    After debugging for almost a day we found out that this is because the message has been transferred to the DLQ but actually resides in FILESTORE_SERVER1, and MDB_DLQ_SERVER2 is not able to access it.
    To work around that, we had to define MDB_DLQ_SERVER1 to cater for SERVER1 failures and MDB_DLQ_SERVER2 to cater for SERVER2 failures.
    The reason I am mentioning this is that, as I said, the DLQ is also a normal distributed queue, but at the same time it is NOT as distributed as it seems.
    Hope you all understand what I just wrote above.
    Now I need to implement exactly the same scenario using four separate physical machines containing my four servers. I tried this by creating four machines where the node manager for each server is running and listening, but when I try to start the server it gives me a certificate exception with a bad user name and password. Anyway, I have seen some posts here regarding this, so I think I'll be fine.
    Thanks again,
    Sheeraz
