Solution: multiple diags with name efa.dat found

This is a solution to a problem I hit. When I tried to run a diag on the model, it would throw this error:
sims: locating diag efa.dat
sims: Looking for diag under $SIMS_LAUNCH_DIR
sims: Caught a SIGDIE. multiple diags with name efa.dat found at /import/dtg-data20/jj155244/OpenSPARCT2/tools/src/sims,1.272 line 4581.
Solution: do not run in the $DV_ROOT directory. Create a subdirectory for the run or run elsewhere.
An efa.dat file is created in the run directory by sims, and another copy lives at $DV_ROOT/verif/diag/assembly/include/efa.dat. sims looks for the efa.dat file starting in the run directory, finds both files, and complains about finding multiple diags with that name.
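A minimal sketch of the workaround (the directory name is illustrative; pass whatever options you normally give sims):

# run from a scratch directory instead of $DV_ROOT, so the efa.dat that sims
# generates cannot collide with $DV_ROOT/verif/diag/assembly/include/efa.dat
mkdir ~/myrun
cd ~/myrun
sims <your usual options>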


Similar Messages

  • Error:configuration with name 'default' not found.

    Hi all. I am fairly new to android programming. I am trying to use the CreativeSDK, specifically the image editing portion. I have been trying to follow the steps at the following link: Adobe Creative SDK
    I first edit my settings.gradle file and then sync... This works with no error.
    I then go on to edit my build.gradle (making sure I edit the module-level file). After editing it, I get prompted with "sync project," which I accept. I then get the following error:
    "error:configuration with name 'default' not found."
    Can someone please guide me or tell me what I should do or what I should read?
    Thank you for your time.
    PS: I hope I posted this in the correct place.

    The Cloud forum is not about using individual programs; it is about the Cloud as a delivery & install process.
    If you start at the Forums Index https://forums.adobe.com/welcome
    you will be able to select a forum for the specific Adobe product(s) you use.
    Click the "down arrow" symbol on the right (where it says All communities) to open the drop-down list and scroll.

  • Multiple Backups with Same Time/Date

    I did a complete backup, reset and restore for my iPad 2 this morning (to ensure maximum performance with IOS 8.1). It worked like a charm, but now I have two seemingly identical backups in iTunes - one with a date in the title and the other without. I have the same situation with my iPhone 5. To make this even more puzzling, I performed the backup and restore using iCloud, but the backups are on my PC. Can anyone explain what is going on? I'd like to remove redundant backups, if possible, but I'm not sure which ones to get rid of (if any).

    There is no way to do this automatically, but you can switch designated TM drives periodically in System Preferences -> Time Machine. Another idea would be to use a different backup solution for your secondary backup: make a bootable clone on the second drive using CCCloner or Superduper. Both of those utilities let you schedule backups automatically, and this way you won't rely on a single backup program; plus you'll get the extra benefits of bootable clone backups, which TM does not offer. That's what many people (myself included) do.

  • Console logging multiple crashes with name "systemstatsd" (Mavericks)

    Ever since installing Mavericks on my MacBook Pro (mid 2009), the system keeps freezing every few minutes (every 2 minutes). It stays frozen for about 2 minutes. It doesn't matter what application I am using... it's the same (hang after a few minutes).
    I looked at the Console app (it itself shows as not responding in Activity Monitor when I try to look at the crash reports). It's been logging multiple crash reports under the name "systemstatsd"... I have over 50 crash reports by now.
    Here is a copy of what is inside those crash reports:
    https://docs.google.com/document/d/1zZOHt88SYCtAnxZSn9QLbb9y9Q0OMPU-yI1D1Q4QkAU/edit?usp=sharing
    Here is EtreCheck Output:
    https://docs.google.com/document/d/1JwCcLgh4vvun3NjWjh1iXOfNbjL9kJicdjrMLtzjbGs/edit?usp=sharing
    A few more things I tried:
    Verify disk (Macintosh HD): it failed
    Repair disk: I repaired it after Verify Disk failed
    The problem is still there; repairing the disk didn't do anything
    Please help

    Same problem with constant systemstatsd crashes - every 4 mins or so.
    Computer is freezing for only a second or two each time.
    Clean install of Mavericks 10.9.1 on a mac mini server Macmini4,1
    Verify / repair disk revealed no problems.

  • Handling multiple cursors with column of date datatype  issue reg

    Dear all,
    I am using three cursors in one PL/SQL block. Each of the three cursors fetches data based on a column of DATE data type. Because each condition relates to the same table, a contradiction may arise, i.e.:
    1. When the code executes, cursor c1 will do the job perfectly, but c2 and c3 may no longer serve their purpose, since the job has already been done by cursor c1 (of which I am not sure). My doubt: I need all cursors to work based on their own conditions.
    Any suggestion from your side to sort out this issue would be appreciated.
    Thanks n regards
    Laxman
    begin
      for c1 in (select reqid from request
                 where lastmoddate < sysdate - 8/24
                   and statuscode = 1
                   and assigned_personid is not null) loop
        update srequest set status = 'open' where reqid = c1.reqid;
        commit;
      end loop;
      for c2 in (select reqid from request
                 where lastmoddate < sysdate - 1
                   and statuscode = 1
                   and assigned_personid is null) loop
        update srequest set status = 'open' where reqid = c2.reqid;
        commit;
      end loop;
      for c3 in (select reqid from request
                 where lastmoddate < sysdate - 14
                   and statuscode = 1
                   and assigned_personid is null) loop
        update srequest set status = 'open' where reqid = c3.reqid;
        commit;
      end loop;
    end;

    If you look, your third cursor has already been included in your second cursor (sysdate - 14 < sysdate - 1), so there's only two conditions that you need to run.
    You're also doing row-by-row aka slow-by-slow processing. It would be much, much better (*) if you converted this to one SQL statement - I've rewritten it to be a MERGE statement:
    merge into srequest sreq
    using (select reqid
           from   request
           where  statuscode = 1
           and    ((lastmoddate < sysdate - 8/24
                    and assigned_personid is not null)
                   or
                   (lastmoddate < sysdate - 1
                    and assigned_personid is null))) req
      on (sreq.reqid = req.reqid)
    when matched then
      update
        set status = 'OPEN';
    (*) more performant, easier to read, debug and maintain.

  • Multiple rows with single date

    I am using a CDC control task to extract and load data from our OLTP source to our Data Warehouse.
    For our Date dimension, we have a DateKey which is a DATETIME stamp (created using SSAS dimension wizard). Time is always 00:00:00 as a DATETIME value is required for populating the dates.
    Our fact table destination uses this DateKey as a FK.
    Our fact table OLTP data source contains multiple records with a single DATE and TIME field. This is fine, as I can convert the DATE field to a DATETIME using a derived column.
    However, as our Date dimension has granularity of a day, all we require for our fact table is the most recent daily record from our OLTP data source.
    At present, the data flow task is attempting to write all records and failing on PK constraint.
    How would I go about traversing all records for each date and loading only the most recent for that day? Is there a better way to go about this?

    If you wish to get only one record for today, you can simply use in your source query:
    SELECT TOP 1 ... FROM OLTPdata ORDER BY DateTimeColumn DESC
    If you need one record for each date, you may use ranking functions:
    WITH Q AS (
      SELECT
        ROW_NUMBER() OVER (PARTITION BY CAST(DateTimeColumn AS DATE)
                           ORDER BY DateTimeColumn DESC /*, add a tie-breaker column */) AS ROWNUM,
        *
      FROM OLTPData
    )
    SELECT * FROM Q WHERE ROWNUM = 1

  • "no data found" run-time error masking SQL/report mismatch

    Hi all,
    At last, figured out a vexing problem and wondering if anyone else either:
    a) has also hit the problem, and hopefully
    b) has figured out a clever way around it.
    Namely, in our AppEx apps, we rely on SQL query generation from PL/SQL packaged functions. This "best practice" promotes reuse, automated testing, etc. Great idea - works great.
    However, we've repeatedly come across a situation where we go to run a page with a report on it only to get a "report error: ORA-01403: no data found" message where the report should be. Not much to go on. After trial and error, it turns out that simply going to the Region Definition page (where the PL/SQL function call is defined) and clicking the "Apply Changes" button cleared the problem up.
    Mystifying because the actual SQL query generated by the PL/SQL is valid (we've got a nightly testing job that pulls the PL/SQL function calls out of the AppEx metadata tables, executes them to get back the SQL and then validates the SQL).
    Turns out this problem looks to be a result of columns changing in the actual SQL itself, and hence not matching up to the Region Attributes (column names, one assumes) that AppEx knows about. Simply clicking Apply Changes causes AppEx to validate the returned query and then it adjusts the column attributes (one assumes) so that things match up.
    So - the $64,000 question(s):
    1) Are there any cool AppEx APIs to be able to try and detect this situation? Given an app of middling complexity (50-100 pages, each with various queries/reports), this is not an attractive issue to deal with manually.
    2) Any cool AppEx APIs to fix, or auto-sync these situations? (Essentially programmatically calling the "Apply Changes" button if you will).
    At a minimum, it would be great if AppEx could be updated to put out some kind of more informative error message when this occurs - maybe something along the lines of "Region Attributes Do Not Match Data Returned from Query", or something like that at least.
    Thanks for any input/ideas,
    Jim C.

    Thanks to all for your prompt responses.
    Vikas actually did me the favor of pretty much clarifying my info for me (tks Vikas). Yes, to all the above. It's PL/SQL code generating a SQL query, so 1 is (a); we want to use query-specific columns so it is (2a). And yes, the whole problem is that the something does change to cause the SELECT statement column list to change...nature of the beast, so "don't do that" doesn't really help here.
    Scott - sorry, should have been more explicit. Basically, we have a PL/SQL function behind a report that returns a SQL statement for the report. If that PL/SQL code changes to add a new column to the report (without going to the corresponding Report Attributes page and clicking the "Apply Changes" button to get AppEx to revalidate the query), then you wind up with this "no data found" error msg, which doesn't exactly point you to the root of the problem.
    It seems as though the "parse at compile-time" is really what's going on here. There must be some kind of "run-time" check going on as well, that is resulting in the "no data found" message. Seems as though it ought to be fairly straightforward to add some kind of check at run-time to handle that exception a little cleaner. Is there an official process to register a "Request for Process Enhancement" for AppEx to do this?
    In the meantime, thank you Vikas for the pointer to the APEX_APPLICATION_PAGE_RPT_COLS view - that looks like it will do the trick nicely. Given that, we can now add logic to our nightly "app tester" job that can compare what columns AppEx expects to find in a given report (for a given page) with the actual SQL (coming back from the PL/SQL function call) to essentially "validate" the AppEx meta data and at least let us know when these things get out of sync.
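    For anyone scripting that same check, a minimal sketch of the dictionary query (column names assumed from the APEX dictionary views; verify them against your APEX version):
    select application_id, page_id, region_id, column_alias
    from   apex_application_page_rpt_cols
    where  application_id = :app_id
    order  by page_id, region_id;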
    BTW - if anyone would be interested in the actual contents of that "app tester" logic, I'd be happy to post it (someplace...here? Studio site?). It's basically just a PL/SQL block of code that currently runs in cron that just validates any SQL embedded in our app. (I suppose it is a little "hard-coded" since it does use our naming conventions for packages/functions to parse the PL/SQL calls from the Meta Data but it might still serve as a useful starting point...) Since our AppEx app(s) sit on top of a database schema that is in fairly constant flux, we need the ability to know when somebody has changed something in the schema that needs to be accounted for in AppEx. The job primarily just parses the AppEx meta data to find PL/SQL function calls that return SQL, executes that PL/SQL to get the generated SQL, then just validates that SQL and reports back any invalid SQL calls. Perhaps we're in some unusual development environment (15-20 people working on a database schema with 700-800 tables/views) but it seems as though it would be fairly easy, for anybody using PL/SQL to generate SQL (which is a GREAT and powerful thing, by the way - thanks to whoever thought that up in AppEx land) to run into this issue.
    Jim C.

  • Error processing request  ORACLE-01403:no data found

    We surmounted the blank page issue by applying the patch to APEX 4.2.2.
    Now we patched/upgraded to APEX 4.2.3, and no longer have the blank page issue, but have this one that seems similar:
    I am trying here, since I am not getting any debug messages - the POST goes in and immediately comes back with the no data found error.
    APEX 4.2.3
    APEX Listener 2.0.2
    Glassfish 4.0
    Apps install OK, and can display the Login page (Show), but when trying to Login (Accept),
    immediately get this error:
    Error Error processing request
              ORA-01403: no data found
    Debug messages only show the Show entries - nothing, not even the first Accept line.
    Firebug shows similar - upon Post (which looks OK) the immediate response is the error page and message.
    I do not have access to the APEX Listener logs tonight, but expect to get to them, or have someone check them out, tomorrow AM.
    Any thoughts or suggestions?
    This happens across all apps (15) except for one. Am still trying to discern the difference in that app vs the others.
    Thank you -

    An update:
    I have verified that some apps work, and some apps show the
    Error Error processing request
            ORA-01403: no data found
    error.
    Happens across different workspaces.
    Have confirmed that for an app that has the error,
       I get the error using the APEX Listener deployment. 
       I do NOT get the error using the HTTP server deployment.
    This is the same in a Linux environment and in a Windows environment.
    For one app, I have an earlier version that gives the error, a later version (new app #) that does not give the error.
    APEX 4.2.3
    APEX Listener 2.0.3.221.10/3
    Glassfish 4.0
    Again even at  Debug Level 9, I get no debug entries for the POST
    For an app that gives the error, the APEX Listener log entries are (from the POST):
    ==== Processing Request: ====
    Attempting to process with PL/SQL Gateway
    user-agent: Mozilla/5.0 (Windows NT 5.1; rv:23.0) Gecko/20100101 Firefox/23.0
    host: qdcls1534:8082
    Applied database connection info
    POST /apex/apexd2/wwv_flow.accept
    ==== Headers in Request ====
    ==== Cookies in Request ====
    content-length: 412
    content-type: application/x-www-form-urlencoded;charset=UTF-8
    request parameter: p_flow_id=440
    request parameter: p_flow_id=440
    request parameter: p_flow_step_id=101
    request parameter: p_instance=1712832201985
    request parameter: p_instance=1712832201985
    request parameter: p_page_submission_id=8493461183923
    request parameter: p_page_submission_id=35145982114211
    request parameter: p_request=LOGIN
    request parameter: p_request=
    request parameter: p_debug=LEVEL9
    request parameter: p_debug=LEVEL9
    request parameter: p_arg_names=13616030313099104709
    request parameter: p_arg_names=13616030522289104711
    request parameter: p_t01=karen.x.cannell
    request parameter: p_t02=sssssssss
    request parameter: p_md5_checksum=
    request parameter: p_page_checksum=C2302553B43577579FF77585749CA016
    Using Procedure:wwv_flow.accept
    request parameter: p_flow_step_id=101
    Requesting Pool:apexd2
    pool exists: apexd2
    isValidRequest(), procedure name: <wwv_flow.accept>
    Validating: wwv_flow.accept
    *** Total number of arguments: 510
    SID: 214
    *** Total number of arguments: 510
    begin
    wwv_flow.accept(p_flow_step_id=>?,
    p_md5_checksum=>?,
    p_arg_names=>?,
    p_page_checksum=>?,
    p_t02=>?,
    p_t01=>?,
    p_debug=>?,
    p_request=>?,
    p_page_submission_id=>?,
    p_flow_id=>?,
    p_instance=>?);
    commit;
      end;"
    Parse: 0 ms
    p_flow_step_id= null
    p_flow_step_id=[101, 101]
    {p_md5_checksum=, p_page_submission_id=[8493461183923, 35145982114211], p_debug=[LEVEL9, LEVEL9], p_page_checksum=C2302553B43577579FF77585749CA016, p_request=[LOGIN, ], p_t02=sssssssss, p_t01=karen.x.cannell, p_flow_id=[440, 440], p_instance=[1712832201985, 1712832201985], p_flow_step_id=[101, 101], p_arg_names=[13616030313099104709, 13616030522289104711]}
    p_request=[LOGIN, ]
    p_md5_checksum=
    {p_md5_checksum=, p_page_submission_id=[8493461183923, 35145982114211], p_debug=[LEVEL9, LEVEL9], p_page_checksum=C2302553B43577579FF77585749CA016, p_request=[LOGIN, ], p_t02=sssssssss, p_t01=karen.x.cannell, p_flow_id=[440, 440], p_instance=[1712832201985, 1712832201985], p_flow_step_id=[101, 101], p_arg_names=[13616030313099104709, 13616030522289104711]}
    p_arg_names=[13616030313099104709, 13616030522289104711]
    p_arg_names: {13616030313099104709, 13616030522289104711}
    {p_md5_checksum=, p_page_submission_id=[8493461183923, 35145982114211], p_debug=[LEVEL9, LEVEL9], p_page_checksum=C2302553B43577579FF77585749CA016, p_request=[LOGIN, ], p_t02=sssssssss, p_t01=karen.x.cannell, p_flow_id=[440, 440], p_instance=[1712832201985, 1712832201985], p_flow_step_id=[101, 101], p_arg_names=[13616030313099104709, 13616030522289104711]}
    {p_md5_checksum=, p_page_submission_id=[8493461183923, 35145982114211], p_debug=[LEVEL9, LEVEL9], p_page_checksum=C2302553B43577579FF77585749CA016, p_request=[LOGIN, ], p_t02=sssssssss, p_t01=karen.x.cannell, p_flow_id=[440, 440], p_instance=[1712832201985, 1712832201985], p_flow_step_id=[101, 101], p_arg_names=[13616030313099104709, 13616030522289104711]}
    p_instance= null
    p_instance=[1712832201985, 1712832201985]
    {p_md5_checksum=, p_page_submission_id=[8493461183923, 35145982114211], p_debug=[LEVEL9, LEVEL9], p_page_checksum=C2302553B43577579FF77585749CA016, p_request=[LOGIN, ], p_t02=sssssssss, p_t01=karen.x.cannell, p_flow_id=[440, 440], p_instance=[1712832201985, 1712832201985], p_flow_step_id=[101, 101], p_arg_names=[13616030313099104709, 13616030522289104711]}
    p_flow_id= null
    p_flow_id=[440, 440]
    {p_md5_checksum=, p_page_submission_id=[8493461183923, 35145982114211], p_debug=[LEVEL9, LEVEL9], p_page_checksum=C2302553B43577579FF77585749CA016, p_request=[LOGIN, ], p_t02=sssssssss, p_t01=karen.x.cannell, p_flow_id=[440, 440], p_instance=[1712832201985, 1712832201985], p_flow_step_id=[101, 101], p_arg_names=[13616030313099104709, 13616030522289104711]}
    p_page_submission_id= null
    p_page_submission_id=[8493461183923, 35145982114211]
    {p_md5_checksum=, p_page_submission_id=[8493461183923, 35145982114211], p_debug=[LEVEL9, LEVEL9], p_page_checksum=C2302553B43577579FF77585749CA016, p_request=[LOGIN, ], p_t02=sssssssss, p_t01=karen.x.cannell, p_flow_id=[440, 440], p_instance=[1712832201985, 1712832201985], p_flow_step_id=[101, 101], p_arg_names=[13616030313099104709, 13616030522289104711]}
    p_request= null
    p_md5_checksum=
    {p_md5_checksum=, p_page_submission_id=[8493461183923, 35145982114211], p_debug=[LEVEL9, LEVEL9], p_page_checksum=C2302553B43577579FF77585749CA016, p_request=[LOGIN, ], p_t02=sssssssss, p_t01=karen.x.cannell, p_flow_id=[440, 440], p_instance=[1712832201985, 1712832201985], p_flow_step_id=[101, 101], p_arg_names=[13616030313099104709, 13616030522289104711]}
    p_debug= null
    p_debug=[LEVEL9, LEVEL9]
    {p_md5_checksum=, p_page_submission_id=[8493461183923, 35145982114211], p_debug=[LEVEL9, LEVEL9], p_page_checksum=C2302553B43577579FF77585749CA016, p_request=[LOGIN, ], p_t02=sssssssss, p_t01=karen.x.cannell, p_flow_id=[440, 440], p_instance=[1712832201985, 1712832201985], p_flow_step_id=[101, 101], p_arg_names=[13616030313099104709, 13616030522289104711]}
    p_t01= karen.x.cannell
    p_t01=karen.x.cannell
    {p_md5_checksum=, p_page_submission_id=[8493461183923, 35145982114211], p_debug=[LEVEL9, LEVEL9], p_page_checksum=C2302553B43577579FF77585749CA016, p_request=[LOGIN, ], p_t02=sssssssss, p_t01=karen.x.cannell, p_flow_id=[440, 440], p_instance=[1712832201985, 1712832201985], p_flow_step_id=[101, 101], p_arg_names=[13616030313099104709, 13616030522289104711]}
    p_t02= sssssssss
    p_t02=sssssssss
    {p_md5_checksum=, p_page_submission_id=[8493461183923, 35145982114211], p_debug=[LEVEL9, LEVEL9], p_page_checksum=C2302553B43577579FF77585749CA016, p_request=[LOGIN, ], p_t02=sssssssss, p_t01=karen.x.cannell, p_flow_id=[440, 440], p_instance=[1712832201985, 1712832201985], p_flow_step_id=[101, 101], p_arg_names=[13616030313099104709, 13616030522289104711]}
    p_page_checksum= C2302553B43577579FF77585749CA016
    p_page_checksum=C2302553B43577579FF77585749CA016
    Exec: 37 ms
    ==== Headers from Results ====
    Setting Content-Type (Content-type): text/html; charset=UTF-8
    Got results length: 307
    Processed PL/SQL Gateway request
    ==== Request Processed ====
    It just stops ...
    Any ideas?
    I have tried all of the Compatibility settings, no difference.
    If it is something I can change in the app, that would be great.
    The app that does not work was imported from an APEX 4.1 instance.
    The app that does work was imported from an APEX 4.2.3 instance (Windows, APEX 4.2.3, APEX Listener 2.0.3, recently upgraded from APEX 4.2.2 and APEX Listener 2.0.2).
    Any help or suggestions will be greatly appreciated,
    Karen

  • X-distr.chain status in combination with valid from date

    Hello, I am using the MM02 X-distr.chain status in combination with a valid-from date, to give materials a certain status in the sales order depending on the delivery date of the order line.
    How can I enter multiple statuses with future validation dates?
    Example: the following statuses with validity dates were entered in MM02 for a material
    - material should have status A valid from January
    - material should have status B valid from April
    - material should have status C valid from July
    - material should have status D valid from September
    When I enter a sales order with delivery date in January, the system replies with status C.
    When I enter a sales order with delivery date in April, the system replies with status C.
    When I enter a sales order with delivery date in July, the system replies with status C.
    When I enter a sales order with delivery date in September, the system replies with status D.
    So only the current status (or the previous status, when the current one is not valid yet) is retained by the system.
    How could multiple statuses be used?
    Thanks in advance
    Joos

    Hello Jalo,
    Thanks for your reply,
    My customer sells season-relevant materials, so they want to control order entry and delivery creation based on predefined dates.
    Status A  = material is blocked for order entry and delivery creation
    Status B  = material is allowed for order entry, delivery creation is blocked
    Status C  = material is allowed for order entry and delivery creation.
    Status D  = material is blocked for order entry, delivery creation is allowed.
    I had hoped to use standard functionality with this status field; it meets most of my customer's requirements, but at this moment the only alternative is to create a new table where these dates can be defined, plus some ABAP logic in the user exit to set the status based on the table information.
    Regards
    Joos

  • Error! No data source found with name 'mynewdsname' (after asking 0 providers)

    Hi all,
    I am trying out the instructions given below.
    http://dev.day.com/docs/en/cq/current/developing/jdbc.html
    I followed them exactly, but removed the <cq:include script="head.jsp"/> line from the jsp since I do not have a head.jsp.
    my config node settings are as follows.
    But when I go to the page, I get the error message below.
    error! No data source found with name 'mynewdsname' (after asking 0 providers)
    DB is up and running. I could not find any issue with it. Code is as follows.
    <%
    DataSourcePool dspService = sling.getService(DataSourcePool.class);
    try {
        DataSource ds = (DataSource) dspService.getDataSource("mynewdsname");
        // ... use the data source ...
    } catch (Exception e) { /* lookup fails when no provider knows the name */ }
    %>
    Any help will be great. TX

    The document is outdated; for now, please follow http://dev.day.com/content/kb/home/cq5/Development/HowToConfigureSlingDatasource.html
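    Once the datasource is found, usage is plain JDBC; a minimal sketch (imports from javax.sql and java.sql assumed; the probe query is hypothetical):
    DataSource ds = (DataSource) dspService.getDataSource("mynewdsname");
    Connection con = null;
    try {
        con = ds.getConnection();                     // standard javax.sql.DataSource call
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT 1"); // hypothetical probe query
        while (rs.next()) { /* read results */ }
    } finally {
        if (con != null) { con.close(); }             // always return the connection
    }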

  • In Windows 7 using Adobe Reader XI (11.0.07) was able to copy an item (name, number, date) and paste in another document.  In Windows 8.1 using same version of Adobe Reader XI (11.0.07) not able to do this.  Any solutions?


    With a computer running Windows 7, using the cursor you can select an item, then right-click and select copy. With a computer running Windows 8 there is a hand instead of a cursor, so you are unable to select an item and therefore unable to copy. Is there a way to have a cursor rather than the hand?
    Eureka!!! Just found it: by right-clicking on the hand you can then select "Select Tool" and then copy. Hurray!!!

  • [svn] 949: Bug: BLZ-96 - When sending a HttpService request from ActionScript with multiple headers with the same name, it causes a ClassCastException in the server

    Revision: 949
    Author: [email protected]
    Date: 2008-03-27 07:12:59 -0700 (Thu, 27 Mar 2008)
    Log Message:
    Bug: BLZ-96 - When sending a HttpService request from ActionScript with multiple headers with the same name, it causes a ClassCastException in the server
    QA: Yes - try again with legacy-collection true and false.
    Doc: No
    Checkintests: Pass
    Details: Another try in fixing this bug. When legacy-collection is false, Actionscript Array on the client becomes Java Array on the server and my fix yesterday assumed this case. However, when legacy-collection is true, Actionscript Array becomes Java ArrayList on the server. So added code to handle this case.
    Ticket Links:
    http://bugs.adobe.com/jira/browse/BLZ-96
    Modified Paths:
    blazeds/branches/3.0.x/modules/proxy/src/java/flex/messaging/services/http/proxy/RequestFilter.java
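    The Details line above says the fix handles both representations; a minimal sketch of that kind of dual-type check (hypothetical code, not the actual RequestFilter source):
    // header values arrive as a Java array when legacy-collection is false,
    // or as an ArrayList when legacy-collection is true
    Object value = headers.get(name);   // 'headers' and 'addHeader' are hypothetical
    if (value instanceof Object[]) {
        for (Object v : (Object[]) value) { addHeader(name, v); }
    } else if (value instanceof java.util.List) {
        for (Object v : (java.util.List<?>) value) { addHeader(name, v); }
    } else {
        addHeader(name, value);         // single value, no special handling
    }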

    Hi all!
    Just to post the solution to this if anyone ever runs across this thread...
    For some reason I had it wrong the first time; I don't have time right now to see why, but here is what worked for me:
    HashMap primaryFile = new HashMap();
    primaryFile.put("fileContent", bFile);                        // byte[] holding the file content
    primaryFile.put("fileName", uploadedFile.getFilename());
    operationBinding.getParamsMap().put("primaryFile", primaryFile);
    // the unbounded wsdl element maps to an array of HashMaps, one per property
    HashMap customDocMetadata = new HashMap();
    HashMap[] properties = new HashMap[1];
    HashMap customMetadataPropertyRoom = new HashMap();
    customMetadataPropertyRoom.put("name", "xRoom");              // metadata field name
    customMetadataPropertyRoom.put("value", "SOME ROOM");         // metadata field value
    properties[0] = customMetadataPropertyRoom;
    customDocMetadata.put("property", properties);
    operationBinding.getParamsMap().put("CustomDocMetaData", customDocMetadata);
    Basically, an unbounded wsdl type is an array of objects (HashMaps), which makes sense; I thought I had it like this before, must have messed up somewhere...
    Good luck all!

  • [svn] 931: Bug: BLZ-96 - When sending a HttpService request from ActionScript with multiple headers with the same name, it causes a ClassCastException in the server

    Revision: 931
    Author: [email protected]
    Date: 2008-03-26 11:31:01 -0700 (Wed, 26 Mar 2008)
    Log Message:
    Bug: BLZ-96 - When sending a HttpService request from ActionScript with multiple headers with the same name, it causes a ClassCastException in the server
    QA: Yes - we need automated tests for this basic case.
    Doc: No
    Checkintests: Pass
    Details: RequestFilter was not handling multiple headers with the same name properly.
    Ticket Links:
    http://bugs.adobe.com/jira/browse/BLZ-96
    Modified Paths:
    blazeds/branches/3.0.x/modules/proxy/src/java/flex/messaging/services/http/proxy/RequestFilter.java


  • Document Creation error - "We're sorry. We can't open document name because we found a problem with its contents"

    Morning Friends,
    I have created a SharePoint 2010 "Site Workflow" that is designed to take information from a form and create a Word doc with the gathered information and store this Word doc in a document library.
    I am using Sharepoint 2013 with Office 2013 
    I understand there are a lot of steps (19) outlined below, and I can provide more information as needed, but the bottom line is this: the workflow successfully takes info from an initiation form, uses the info to create a Word doc, and places this Word doc in a library.
    When attempting to open / edit the doc, I receive the error
    "We're sorry. We can't open <document name> because we found a problem with its contents"
    Details - No error detail available.
    Any info or advice would be greatly appreciated. 
    Very high level view of what I have done:
    1 - Created content type called "Letters"
    2 - Added site columns " First Name" and "Last Name"
    3 -  Created and saved to my desktop a very basic Word document (Letter.docx) that says "Hello, my name is XXXX XXXX"
    4 - In the advanced settings of the "Letters" content type I uploaded this "Letter.docx" file as the new document template.
    5 - Created a new document library called "Letters"
    6 - In Library Settings - Advanced Settings, clicked "Yes" to enable the management of content types.
    7 - Then I clicked "Add from existing content types" and added the "Letters" content type
    8 - Back in the advanced settings of the "Letters" content type I selected "Edit Template" and replaced the first XXXX with the Quick Part "First Name" and the second XXXX with the Quick part "Last Name"
    9 - Created a new 2010 Site workflow called "Create a Letter"
    10 - To the workflow I added the action "Create List Item"
    11 - Configured the action to create Content Type ID "Letters" in the document library "Letter" 
    12 - For the "Path and Name" I gave it a basic name of "Letter to"
    13 - The next step was to create the Initiation Form Parameters and added to form entries "First Name" and "Last Name"
    14 - I then linked the initiation form fields to the data source "Workflow Variables and Parameters" to their respective Field from Source parameters
    15 - Went back to the "Path and Name" and modified the basic name of "Letter to" to include the first and last name parameters.
    16 - Saved - published and ran the work flow.
    17 - As expected, the Initiation Form prompts for First and Last Name ("John Doe"). Then click "Start".
    18 - Go to the document library "Letters" and see a new Word document created titled "Letter to John Doe"
    19 - Go to open / edit the Word document and receive the following error
    thoughts? Any info or advice would be greatly appreciated. 

    See this MS support article for SP2010 workflows and generating Word docs:
    https://support.microsoft.com/kb/2889634/en-us?wa=wsignin1.0
    "This behavior is by design. The Create
    List Item action in the SharePoint
    2010 Workflow platform can't convert Word content type file templates to the correct .docx format."
    I've had success in using SP 2013, Word 2013 (saving a .docx as the template instead of .dotx for the document library content type), and an SP 2010 workflow using SP Designer 2013.

  • Report Query returning "No Data Found" with bind variables

    I put a simple query into Report Query:
    Select "bluefish". "name" as "name",
    "bluefish"."primary_flag" as "primary_flag",
    "bluefish"."status" as "status",
    "bluefish"."ID" as "ID" from "bluefish" "bluefish" where "bluefish"."ID" = :P3_XPRINTID
    When I test the query, data is returned; however, when I try to run the query using the "Test Report" button, I get an error 01403 No Data Found. If I replace the bind variable with an explicit value, the report works.
    Anyone have any ideas as to what is causing this problem? I am using the Generic Report Layout, with various output types. I AM editing the query and defining the bind variable before I test the report (otherwise, the query wouldn't run).
    Charles

    Sometimes if you create a form/report/whatever using the wizards, it will create an After Submit process called something like Reset Page - basically you want to make sure you don't have any After Submit processes that call a Clear Session state (it may say something like Clear Cache for Page).
    In your branch, there is also a box that says Clear Cache - you want to make sure that does not have page Number 8 in it or it will clear your session state.
    Put the page in Debug mode and read through it - check to make sure your value is getting saved and maybe you can see what is going wrong.
