Unclear handling of sequence data

Hi guys,
here is yet another batch of questions regarding the handling of sequence data.
I have checked and compared all of the examples from the SDK in various versions, and they all seem to do things differently; the SDK documentation does not really shed any light on what the expected or correct behaviour is.
This applies especially to disposing and copying sequence data handles, where doing it incorrectly presumably leads to memory leaks or access violations.
I wrote a simple example that uses a very simple SequenceData (a struct containing only one int, see below), so I don't have to cope with flattening/unflattening.
In the snippet below, I implemented SequenceSetup(), SequenceSetdown() and SequenceResetup() and marked six spots with questions where the documentation and/or the implementation in the examples is inconsistent.
It would be great if anyone with more insight could give some reliable answers on what you are expected to do there. :-)
Thanks,
Toby
struct SequenceData {
    int param;
};

static PF_Err SequenceSetup(PF_InData* in_data, PF_OutData* out_data)
{
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    // Q1: Are we allowed (or required) to delete the input sequence data if it exists?
    if (in_data->sequence_data) {
        suites.HandleSuite1()->host_dispose_handle(in_data->sequence_data);
    }
    // Q2: Are we allowed (or required) to delete the output sequence data if it exists?
    if (out_data->sequence_data) {
        suites.HandleSuite1()->host_dispose_handle(out_data->sequence_data);
    }

    PF_Handle outH = suites.HandleSuite1()->host_new_handle(sizeof(SequenceData));
    if (!outH) return PF_Err_OUT_OF_MEMORY;

    SequenceData* outP = static_cast<SequenceData*>(suites.HandleSuite1()->host_lock_handle(outH));
    if (outP) {
        AEFX_CLR_STRUCT(*outP);
        outP->param = 0;
        out_data->sequence_data = outH;
        // Q3: Do we really NOT have to set flat_sdata_size?
        // (according to the spec it is unused, but some samples still set it)
        out_data->flat_sdata_size = sizeof(SequenceData);
        suites.HandleSuite1()->host_unlock_handle(outH);
    }
    if (!out_data->sequence_data) return PF_Err_INTERNAL_STRUCT_DAMAGED;
    return PF_Err_NONE;
}

static PF_Err SequenceSetdown(PF_InData* in_data, PF_OutData* out_data)
{
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    if (in_data->sequence_data) {
        suites.HandleSuite1()->host_dispose_handle(in_data->sequence_data);
    }
    // Q4: Are we required to set both the in_data and out_data sequence_data pointers to NULL?
    in_data->sequence_data = NULL;
    out_data->sequence_data = NULL;
    return PF_Err_NONE;
}

static PF_Err SequenceResetup(PF_InData* in_data, PF_OutData* out_data)
{
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    PF_Handle outH = suites.HandleSuite1()->host_new_handle(sizeof(SequenceData));
    if (!outH) return PF_Err_OUT_OF_MEMORY;

    SequenceData* outP = static_cast<SequenceData*>(suites.HandleSuite1()->host_lock_handle(outH));
    if (outP) {
        AEFX_CLR_STRUCT(*outP);
        if (in_data->sequence_data) {
            SequenceData* inP = static_cast<SequenceData*>(DH(in_data->sequence_data));
            if (inP) {
                outP->param = inP->param;
            }
            // Q5: Are we allowed (or required) to delete the input sequence data if it exists?
            suites.HandleSuite1()->host_dispose_handle(in_data->sequence_data);
        }
        // Q6: Are we allowed (or required) to delete the output sequence data if it exists?
        if (out_data->sequence_data) {
            suites.HandleSuite1()->host_dispose_handle(out_data->sequence_data);
        }
        out_data->sequence_data = outH;
        suites.HandleSuite1()->host_unlock_handle(outH);
    }
    if (!out_data->sequence_data) return PF_Err_INTERNAL_STRUCT_DAMAGED;
    return PF_Err_NONE;
}

To answer my own questions:
Q1: in_data->sequence_data always seems to be NULL in this case, so no action is necessary, but the check does no harm if it stays in
Q2: out_data->sequence_data always seems to be NULL in this case, so no action is necessary, but the check does no harm if it stays in
Q3: out_data->flat_sdata_size is obsolete and should not be used
Q4: seems to do no harm, and since the sequence data is disposed at that point anyway, it should be left in
Q5: yes, this handle needs to be disposed here!
Q6: no, that is not necessary and even problematic (it is the same handle as in_data->sequence_data when the function is called) - see the corrected sketch below
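For reference, here is how SequenceResetup() looks with those answers applied (same simplified signature as above; treat it as a sketch of my current understanding, not an authoritative implementation - the extra dispose on a failed lock is my own addition to avoid leaking the fresh handle):

static PF_Err SequenceResetup(PF_InData* in_data, PF_OutData* out_data)
{
    AEGP_SuiteHandler suites(in_data->pica_basicP);

    PF_Handle outH = suites.HandleSuite1()->host_new_handle(sizeof(SequenceData));
    if (!outH) return PF_Err_OUT_OF_MEMORY;

    SequenceData* outP = static_cast<SequenceData*>(suites.HandleSuite1()->host_lock_handle(outH));
    if (!outP) {
        // don't leak the freshly allocated handle if locking fails
        suites.HandleSuite1()->host_dispose_handle(outH);
        return PF_Err_INTERNAL_STRUCT_DAMAGED;
    }

    AEFX_CLR_STRUCT(*outP);
    if (in_data->sequence_data) {
        SequenceData* inP = static_cast<SequenceData*>(DH(in_data->sequence_data));
        if (inP) {
            outP->param = inP->param;
        }
        // Q5: yes, the input handle must be disposed here
        suites.HandleSuite1()->host_dispose_handle(in_data->sequence_data);
        in_data->sequence_data = NULL;
    }
    // Q6: do NOT dispose out_data->sequence_data here; on entry it is the
    // same handle as in_data->sequence_data, which was just disposed above
    out_data->sequence_data = outH;
    suites.HandleSuite1()->host_unlock_handle(outH);
    return PF_Err_NONE;
}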
Here is the result of my research concerning sequence data from the last few days: http://reduxfx.com/ae_seqdata.pdf
Cheers,
Toby

Similar Messages

  • How to handle multiple tables data in Entity Beans?

    I mean, my bean (non-trivial) is responsible for frequent
    insertion in one table and some deletion on another table.
    Can anyone of you please help?

    Is your data model right? If you are adding in one table and deleting in another, it sounds to me more like a process than an entity, in which case you may want to revisit your data model and simplify it, and add a session bean with a process method to coordinate between the two.
    However, if you want to map multiple different tables within a single entity bean, it is possible and just part of the mapping. How you actually specify it depends on which implementation you are working with.
    Cheers,
    Peter.
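    As a rough illustration of the session-facade idea (hypothetical entity and method names, written in EJB 3/JPA style rather than the EJB 2.x entity beans of the original question; OrderRecord and ReservationRecord are assumed to be mapped entities):

    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    // Hypothetical session facade: one transactional process method that inserts
    // into one table and deletes from another, so the two entities stay simple.
    @Stateless
    public class TransferProcessBean {

        @PersistenceContext
        private EntityManager em;

        public void process(OrderRecord newOrder, long staleReservationId) {
            em.persist(newOrder); // insert into the first table
            ReservationRecord stale = em.find(ReservationRecord.class, staleReservationId);
            if (stale != null) {
                em.remove(stale); // delete from the second table
            }
            // both operations commit or roll back together (container-managed transaction)
        }
    }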

  • How do you create a column of sequenced dates in Numbers

    How do you create a column of sequenced dates in Numbers without typing in each date? For example: 01/05/15, 01/12/15, 01/19/15, 01/26/15, 02/02/15, etc.

    Hi Cha Ling,
    Another way,
    Enter your first two dates that show the desired interval, e.g. 01/05/15 and 01/12/15.
    Select both cells and choose fill from the contextual menu.
    Drag down to fill your column.
    quinn
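    A formula-based variant, assuming your first date sits in A2 and the interval is weekly: put the formula below in A3, then fill it down (Numbers treats adding an integer to a date as adding that many days).
    =A2+7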

  • Handling dynamic item data in VSTS

    Hi Team,
    Can you please let us know how to handle dynamic item data in VSTS?
    Regards
    Raghavendra

    Hi Raghavendra,
    Based on the previous threads you posted in the test forum, I suspect that you want to create a coded UI test, am I right?
    If so, which kind of app do you want to test, WPF or something else? Could you share a screenshot of the real UI controls you want to test? Which controls do you want to test, list item controls or others?
    You know that to find a control in a coded UI test, we have to use unique properties as the search properties, so if your controls are dynamic, the real issue is the search properties you use in your code.
    Like this case:
    https://social.msdn.microsoft.com/Forums/en-US/4c4805f0-230d-459d-a3e5-61e62746c5b9/list-box-item-is-not-get-selected-while-play-back-the-recorded-script-in-coded-ui-test?forum=vsautotest
    As Pankaj suggested, if the list items are dynamic you can use the index values instead of the inner text, so the real issue is how you use the search properties on your side. If your item text value is dynamic, you'd better not use the text value as a search property.
    About how a coded UI test finds a control, see this reference:
    http://blogs.msdn.com/b/balagans/archive/2009/12/28/9941582.aspx
    If I have misunderstood this issue, please feel free to let me know.
    Best Regards,
    Jack

  • Looking for Rental add-on that can handle flexible return dates

    Hi Forum,
    I am wondering which add-on for the rental industry would be good at handling flexible return dates (without cancelling the original contract and creating a new one with the actual date).
    Specifically, would Visnova's Rental add-on handle flexible return dates without actually cancelling the original contract and creating a new contract?
    Are there many ways to handle flexible rental return dates?
    Thanks.

    Hi,
    I have moved your thread here because you are looking for a partner add-on instead of an SAP add-on. Have you searched through this forum and SAP EcoHub?
    Thanks,
    Gordon

  • Coherence cannot handle year in dates of 9999 ?

    Hi,
    I have been running into an issue with dates in coherence. We have some data which specifies a date of 01/01/9999 to indicate that the date does not expire.
    However, this results in errors when loading into Coherence:
    java.io.IOException: year is likely out of range: 9999
    I then found this related thread:
    Storing large DateTime values from .NET
    Is it really true that Coherence cannot handle a valid date? Why would you validate a year in the first place, given that 9999 is actually a valid year?
    TIA
    Martin

    Hi,
    What is the code that causes the error? What version of Coherence are you using?
    I can serialize/deserialize the date you have without seeing any problems. For example:
    Calendar c = Calendar.getInstance();
    c.set(9999, Calendar.JANUARY, 1);
    Date d = c.getTime();
    ConfigurablePofContext pofContext = new ConfigurablePofContext("coherence-pof-config.xml");
    Binary b = ExternalizableHelper.toBinary(d, pofContext);
    Date d2 = (Date) ExternalizableHelper.fromBinary(b, pofContext);
    System.out.println(d2);
    The above code worked fine when I ran it with Coherence 3.7.1.6.
    If I have a class that implements PortableObject with a date field set to 1/1/9999 this also serializes without any problems.
    JK

  • How to handle errors in data templates

    Hi
    What is the recommended way to handle errors? For example, if one of the SQL statements in a data template returns no data, how and where would you create an error message for the user to find and read?
    Thanks,
    Mark

    The closest I have come to doing this is to put conditional statements into the format template. If a value matches an expected condition (e.g. is null), you can return a message (in the report) via the format template (e.g. "No Data Found").
    I am not sure this really answers your question, as this is in the format template, but I generally view the two as a matched pair that work together. I try to stick with data extraction in the data definition, and do all my conditional logic in the format templates.
    Scott
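    For illustration, such a conditional in an RTF format template might look something like this (ROW is a hypothetical group name from the data template):
    <?if:count(ROW)=0?>No Data Found<?end if?>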

  • How to handle custom component data on overviewset save button CRM UI

    Hi,
    I have added a custom component to a standard view, which is enhanced.
    I can handle any data with my buttons on the component, but after editing data
    I need to save the data when the save button on the overview (top) is pressed.
    I have redefined the save button of the overview, but I can't get my data.
    My node name is Root. I think I couldn't bind it to the overview.
    How can I do that?
    Thank you

    It can probably be done via
    http://wiki.sdn.sap.com/wiki/display/CRM/CRMWebUITechnical-CreatingTableViewInWebUI
    I am trying it.
    Thank you

  • Exception Handling for OPEN DATA SET and CLOSE DATA SET

    Hi ppl,
    Can you please let me know what exceptions can be handled for OPEN, READ, TRANSFER and CLOSE DATASET?
    Many Thanks.

    Hi,
    try this way...
      DO.
        TRY.
            READ DATASET filename INTO datatab.
          CATCH cx_sy_conversion_codepage cx_sy_codepage_converter_init
                cx_sy_file_authority cx_sy_file_io cx_sy_file_open.
        ENDTRY.
        IF sy-subrc NE 0.
          EXIT.
        ELSE.
          APPEND datatab.
        ENDIF.
      ENDDO.

  • How to handle Keys (Sequences) generated in Database

    Need some pointers about how to achieve following:
    I am building a cache for TRADES and have a CacheStore supporting LOAD/STORE/REMOVE methods.
    The TRADES cache is associated with TRADE table on the database. I have most of my business logic related to adding a trade in a Database Package.
    If a new trade is being added to the table, Stored Procedure in my package will use a sequence number to assign it a UNIQUE KEY and will return the key with the call
    Now, my cache is using TRADE KEY as a key for the trade objects. If my application is putting a new trade to the cache, then how should this be designed?
    I do not really want to put sequences in the cache, as my stored procedure does this for me and a few other applications use the same SP.
    Has anyone implemented something like this, where the key is not known until the object is entered in the database?
    I can have a TEMPORARY KEY assigned to the object and can replace the record once the trade has been successfully added to the table. As I cannot replace/update cache records in the CacheStore, is there any way to achieve this?
    Following is my cache-config:
    <distributed-scheme>
         <scheme-name>PartitionedTradePOFScheme</scheme-name>
         <service-name>SweetTradeSvc</service-name>
         <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                   <init-param>
                        <param-type>String</param-type>
                        <param-value>position-pof-config.xml</param-value>
                   </init-param>
              </init-params>
         </serializer>
         <backing-map-scheme>
              <read-write-backing-map-scheme>
                   <internal-cache-scheme>
                        <local-scheme></local-scheme>
                   </internal-cache-scheme>
                    <cachestore-scheme>
                         <class-scheme>
                             <class-name>com.db.spg.sweet.coherenceutil.DBCacheStoreTrade</class-name>
                             <init-params>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>{cache-name}</param-value>
                                  </init-param>
                             </init-params>
                        </class-scheme>
                   </cachestore-scheme>
              </read-write-backing-map-scheme>
         </backing-map-scheme>
         <autostart>true</autostart>
     </distributed-scheme>
     I am trying to be as clear as possible. If anything here is confusing, let me know.
    Thanks

    Jonathan.Knight wrote:
    - The AQ messaging is asynchronous, so there will be a time-lag between the original put of the trade and it being updated with the version from the DB.
    - How does the AQ listener know what the key of the temporary trade is that it needs to remove? Presumably you write this to the DB and then send it back in the AQ message?
    Agreed. From my CacheStore, I call a stored procedure inside a database package. I can pass this TEMP key to the SP as one extra parameter, so I do not necessarily need to store it in the database but can use it merely as an extra parameter to be sent back via AQ. Also, when the client puts a new trade, the business logic for most of the trade processing is in our stored procedures. The SP will enrich that trade with additional information from other tables, adjust the trader position and then return the new trade key to the client. GUI/client applications provide the data required to create a trade, but the SP uses this data and, via other local stored procedures, obtains extra information for the trade, enriches all the information in a single trade record and then creates that new record.
    - If the code that listens to the AQ messages removes the original version of the trade from the cache before putting the new version into the cache, you will have a short period of time where that trade does not exist in the cache.
    - Alternatively, if your AQ listener works the other way round and puts the new version into the cache before removing the original version, you will have a short period where there are two copies of the same trade in the cache.
    - Depending on your use-cases for reading these trades back out of the cache, you could have issues during the above times, when there may be one, zero or two copies of the trade.
    Yes, I have been thinking about that as well. The trade entered from the GUI is merely an indicator for the background/database processing; if and only if the database processing is successful is it considered a valid trade. So the duplicate/old copy of the trade would still be considered to be in a "waiting for database update" state. I am designing my GUI accordingly. I will probably check this requirement with the users once again though. You have a valid point.
    - I assume your use-cases for accessing trades from the cache are not using the keys, as they could not rely on having obtained the correct key for a trade since it might change.
    At this point, I am using these TRADE keys as my cache keys, because it is really easier for us to manage TRADES in the cache, especially when there are a lot of lookups involved from our users.
    One more thing: someone I worked with a long time back suggested having a staging/metadata cache for all the trades to be processed from the client. Add these trades to the database and then, in the QueueListener, add the new trade to the trade cache and remove the corresponding old trade from the staging/metadata cache. This also seems good, though I need to see if it is going to complicate matters in terms of processing.
    Thanks a lot JK for your suggestions. Let me know if the logic I explained makes sense.
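    For what it's worth, a minimal sketch of the swap such an AQ listener could perform (hypothetical class, callback signature and cache name; putting the new version before removing the temporary one corresponds to the "two copies for a short period" option discussed above):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // Hypothetical listener callback, invoked when the AQ message reports that
    // the DB assigned the real key to a trade stored under a temporary key.
    public class TradeKeySwapListener {
        public void onTradeStored(long tempKey, long dbKey, Object enrichedTrade) {
            NamedCache trades = CacheFactory.getCache("TRADES");
            trades.put(dbKey, enrichedTrade); // put the authoritative version first...
            trades.remove(tempKey);           // ...then drop the temporary entry
        }
    }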

  • Ways to handle large volume data (file size = 60MB) in PI 7.0 file to file

    Hi,
    In a file-to-file scenario (flat file to XML file), the flat file is picked up via FCC and then sent to XI. In XI it goes through message mapping and then an XSL transformation in sequence.
    The scenario works fine for small files (up to 5 MB), but when the input flat file is larger than 60 MB, XI shows lots of problems: (1) JCo call errors, or (2) sometimes XI even stops and we have to start it manually again to function properly.
    Please suggest a way to handle large volumes (file size up to 60 MB) in a PI 7.0 file-to-file scenario.
    Best Regards,
    Madan Agrawal.

    Hi Madan,
    If every record of your source file were processed individually in the target system, you could split your source file into several messages by setting the Recordsets per Message parameter.
    However, you just want to convert your .txt file into an .xml file, so first try setting the
    EO_MSG_SIZE_LIMIT parameter in SXMB_ADM.
    This could solve the problem in the Integration Engine, but the problem will persist in the Adapter Engine, i.e. the JCo call error.
    Take into account that the file is first processed in the Adapter Engine (File Content Conversion and so on)
    and only then sent to the pipeline in the Integration Engine.
    Carlos

  • Should Not handle large base64Binary data with BPEL?

    Hi,
    we need to implement a file-saving function. I have no problem implementing the web service in a Java class using MTOM streaming, but I question what the best design with BPEL for this is, or whether BPEL should be used for this at all. Please help.
    Per the requirement, the file content could be text entered from a web page or binary data from any source, such as an existing file or an email message body; it is not limited. The web service also receives the desired file name, but the actual file name should be created by the web service based on the desired file name plus some business rule.
    I am thinking of creating a BPEL app for this. The input for the file content is designed to be of type base64Binary so that the application can handle either ASCII or binary data. The BPEL app first needs to call a web service to get the information on where to put the file (this is dynamic) and to generate the actual file name, and then it calls another web service to save the file with the actual name and content. I wonder whether saving content of a large size, such as content read from a PDF file, could cause resource issues due to dehydration in BPEL. I am not so clear about dehydration. Does it mean that when the BPEL process invokes the first web service, the base64Binary data for the file content is first saved into the DB (dehydrated)? Would this cause an issue? If so, does it mean that for this business need we should not use SOA and should instead just implement it with JAX-WS?


  • How to handle multiple fields data

    Hi All,
    My data is :
    FldName    FldTypeCode    Text
    Sandya     02             nothing
    Raj        01             12/Oct/2008
    Lokesh     03             12546
    Harish     04             12565.35
    King       01             12/Nov/2007
    Cobra      02             texttype
    In the UI I have given all three fields' data with a ref cursor. Now, from the UI to the DB, all values are passed to be updated at one time.
    Q) Now my question is: how do I update all the values?
    Eg: Create procedure procdname (ip_allvalues in typerecord)
    is
    begin
    --statements;
    end;
    The above procedure's input parameter should handle all three fields' data when they are passed.
    1) If the parameter handles this, which type do I need to create?
    2) How do I update multiple records at a time?
    Please can anybody help?
    Thanks in advance.

    sanjuv wrote:
    In the UI I have given all three fields' data with a ref cursor. Now, from the UI to the DB, all values are passed to be updated at one time. Q) Now my question is: how do I update all the values?
    With an update statement. Identify the row with a where condition and then set the new values. That's one of the first reasons why primary keys are used.
    1) If the parameter handles this, which type do I need to create?
    2) How do I update multiple records at a time?
    If you don't explain how your user interface is designed to work, nobody can tell you anything about it. Anyway, for such things there is no need to define any type.
    Usually the application should submit an update command (like the following one) for each row to update.
    update <table_name>
       set field_1 = <value1>,
           field_2 = <value2>,
           field_n = <valuen>
     where <primary key column> = <primary key value of the row to update>
    Bye Alessandro

  • Error handling for master data with direct update

    Hi guys,
    For master data with flexible update, error handling can be defined in the InfoPackage, and if the load is performed via the PSA there are several options - so far so clear. But what about direct update...
    My specific question is: if an erroneous record (e.g. invalid characters) occurs in a master data load using direct update, this will set the request to red. But what does this mean for the other (correct) records of the request? Are they written to the master data tables, so that they are present once the master data is activated, or is nothing written to the master data tables if a single record is erroneous?
    Many thanks,
    / Christian

    Hi Christian -
    The difference between flexible upload and direct upload is that direct upload does not have update rules; direct upload still uses the PSA as usual, and you can do testing in the PSA.
    On the second part: when you load master data and an error occurs, all the records for that request number will be in error status, so activation will have no impact on them, i.e. no new records from the failed load will be available.
    hope it helps
    regards
    Vikash

  • Handling DB sequences for create and commit actions on ADF tables

    Hello,
    Most of the tables have sequences used for primary keys, plus columns like created_on and created_by. My table does not require user input for these columns. So when the commit button is clicked on the table, how do I handle these columns? The commit action should use the sequence and enter SYSDATE and the username for the created_on and created_by columns.
    what is the workaround for this ?
    Thanks.

    Use groovy expressions as default values for the EO fields - adf.currentDate and adf.context.securityContext.getUserName()
    For the sequence see the ADF documentation:
    http://docs.oracle.com/cd/E16162_01/web.1112/e16182/bcentities.htm#sm0147
