Issues with On Demand Data Loader

Hello,
We are facing two issues with the On Demand Data Loader.
Issue 1
While inserting 'Contacts' and 'Assets', if the 'Account' information is wrong, the records are created without Accounts, even though "Account" is a required field.
Issue 2
While inserting records, the Data Loader does not check for duplicates, so duplicate records get created.
Kindly advise if anyone has come across similar issues. Thanks,
Dipu
Edited by: user11097775 on Jun 20, 2011 11:46 PM
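
As an interim workaround for Issue 2, a minimal pre-load check along these lines could be run on the import file before it is handed to the Data Loader. The file name, delimiter, column positions and the idea of keying duplicates on the first column are illustrative assumptions, not taken from the actual import map.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashSet;
import java.util.Set;

// Hedged sketch: flag rows with an empty Account column and rows whose key
// column repeats, before the file is given to the On Demand Data Loader.
// Column indexes (0 = key, 3 = Account) and the file name are assumptions.
public class PreLoadCheck {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader("contacts.csv"));
        Set<String> seenKeys = new HashSet<String>();
        String line;
        int row = 0;
        while ((line = in.readLine()) != null) {
            row++;
            if (row == 1) continue;                 // skip header row
            String[] cols = line.split(",", -1);    // naive split; no quoted commas
            if (cols.length < 4) {
                System.out.println("Row " + row + ": too few columns");
                continue;
            }
            String key = cols[0].trim();
            String account = cols[3].trim();
            if (account.isEmpty()) {
                System.out.println("Row " + row + ": missing Account value");
            }
            if (!seenKeys.add(key)) {
                System.out.println("Row " + row + ": duplicate key " + key);
            }
        }
        in.close();
    }
}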

Similar Messages

  • Performance issues with Planning data load & Agg in 11.1.2.3.500

    We recently upgraded from 11.1.1.3 to 11.1.2.3. Post-upgrade, we are facing performance issues with one of our Planning jobs (e.g. Job E): it takes 3x the time to complete in the new environment (11.1.2.3) compared to the old one (11.1.1.3). This job loads the actual data and then does the aggregation. The pattern we noticed is that if we run a restructure on the application and execute this job immediately, it completes in the same time as on 11.1.1.3. However, in current production (11.1.1.3) the job runs in the sequence Job A -> Job B -> Job C -> Job D -> Job E and completes on time, but if we run the same test in 11.1.2.3 in the above sequence it takes 3x the time. We don't have a window to restructure the application before running Job E every time in Prod. The specs of the new environment are much higher than the old one.
    We have Essbase clustering (MS active/passive) in the new environment and the files are stored on a SAN drive. Could this be the cause? Has anyone faced performance issues in a clustered environment?

    Do you have exactly the same Essbase config settings and calculations performing the AGG? Remember, something very small like UPDATECALC ON/OFF can make a BIG difference in timing.

  • Log Issue in HFM data load

    Hi,
    I'm new to Oracle Data Integrator.
    I have an issue with the log file name. I'm loading data into Hyperion Financial Management through ODI. In the interface, when we select the IKM SQL to HFM Data, there is an option to enable a log file. I set it to true and gave the log file name as 'HFM_dataload.log'. After executing the interface, when I navigate to the log folder and view the log file, that file is blank. Also, a new file 'HFM_dataloadHFM6064992926974374087.log' is created and the log details are written to it. Since I have to automate the process of picking up the log file every day:
    * I need the log details to be written to the specified log name, i.e. 'HFM_dataload.log'.
    Also, I am not able to perform any action on that log file (copy the newly generated log file elsewhere, or send it by mail), since I cannot predict the numbers generated along with the specified log file name.
    Kindly help me to overcome this issue.
    Thanks in advance.
    Edited by: user13754156 on Jun 27, 2011 5:08 AM
    Edited by: user13754156 on Jun 27, 2011 5:09 AM
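
    A possible interim workaround, sketched below, is to pick up the most recently modified file that starts with the fixed prefix rather than relying on the exact name; the log directory path used here is a placeholder, not the real ODI log folder.

    import java.io.File;
    import java.io.FilenameFilter;

    // Hedged sketch: locate the newest log whose name starts with the fixed
    // prefix, since the numeric suffix appended at run time cannot be predicted.
    // The directory "/odi/logs" is a placeholder for the real log folder.
    public class LatestHfmLog {
        public static void main(String[] args) {
            File dir = new File("/odi/logs");
            File[] logs = dir.listFiles(new FilenameFilter() {
                public boolean accept(File d, String name) {
                    return name.startsWith("HFM_dataload") && name.endsWith(".log");
                }
            });
            File newest = null;
            if (logs != null) {
                for (File f : logs) {
                    if (newest == null || f.lastModified() > newest.lastModified()) {
                        newest = f;
                    }
                }
            }
            System.out.println(newest == null ? "no log found" : newest.getAbsolutePath());
        }
    }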

    Thanks a lot for the idea.
    I have a question about HFM data loads: in ODI Operator, a warning symbol is shown even though a few records were rejected, instead of an error. Is it possible to make the step fail if one or more records are rejected?
    I have experience with Essbase data loads, where the Operator step fails once the number of rejected records reaches a specified limit.
    Please guide me if I am missing something.
    Regards,
    PrakashV

  • I am having an issue with getting data usage alerts for my iPhone 4s

    I am having an issue with getting data usage alerts for my iPhone 4s from AT&T.  I do not download anything huge at all.
    I looked into it and figured out that the phone dials out at 12:29am every night. I went into Settings > General > About > Diagnostics & Usage > Diagnostics & Usage Data to see this. I then tapped "Don't Send", but I am still getting usage alerts. Can anyone help me please?
    Thanks

    Honestly, from reading the linked thread, they all come off as a bunch of whiny people who cannot be bothered to help themselves.
    Little to nothing in that thread indicates an issue beyond inept consumers.  Yes, I read several pages of the incessant gripes.  Very few made any actual attempt to troubleshoot before complaining about the "Apple issue", and those that did actual troubleshooting got their issues resolved.
    So no, Apple has nothing to fix beyond a few specific devices that are experiencing hardware issues.
    If you have actually put forth effort and done the basic troubleshooting, take the device to Apple for evaluation and possible replacement.  Whining will get nothing accomplished.

  • Has anyone had issues with Administration\Data Import/Export\Data Import???

    Has anyone had issues with Administration\Data Import/Export\Data Import???
    I have a client who has recently upgraded from V2007 to V8.81. They were successfully using this standard function to import supplier prices to their master price list, but now it fails.
    I have looked at the file they are importing and it appears to be fine.
    On closer inspection, it did contain approx. 46,000 entries, so I took the first 1,000 and created a test file, which imported fine.
    The only issue I found was speed, with the test file of 1,000 records taking about 30 minutes to import. This appeared to get slower and slower the further through the file it got!
    Based on this, I have estimated that the whole file would take about 13 hours to import. The client says that when they used to run it on version 2007 it was far quicker.
    In practice, it does appear to run, but the speed is the issue. Having said this, I set the whole file to run last night (overnight) and this morning it appeared to have hung after about 2,307 rows, with nothing else being updated.
    Has anyone any ideas or is aware of performance issues like this?
    Thanks,
    Ian
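
    Given that the 1,000-row test file imported fine, one interim option is to split the 46,000-row file into smaller batches and import them one at a time. A rough splitter sketch follows; the file names and the 1,000-row batch size are assumptions only, and the header row is repeated in each chunk.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.PrintWriter;

    // Hedged sketch: split a large price list file into 1,000-row chunks so each
    // chunk can be imported separately. File names and batch size are illustrative.
    public class SplitImportFile {
        public static void main(String[] args) throws Exception {
            BufferedReader in = new BufferedReader(new FileReader("prices.txt"));
            String header = in.readLine();
            String line;
            int rowsInChunk = 0, chunkNo = 0;
            PrintWriter out = null;
            while ((line = in.readLine()) != null) {
                if (out == null || rowsInChunk == 1000) {
                    if (out != null) out.close();
                    chunkNo++;
                    rowsInChunk = 0;
                    out = new PrintWriter(new FileWriter("prices_part" + chunkNo + ".txt"));
                    out.println(header);   // repeat header in every chunk
                }
                out.println(line);
                rowsInChunk++;
            }
            if (out != null) out.close();
            in.close();
        }
    }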

    Always an option, but would you give your clients access to this tool?
    Not sure really.
    I have uploaded a copy of their database onto my test system and run the same routine. It is equally as slow.
    I can't gauge whether it's an issue that 8.81 has and 2007 didn't, as I only have the client's word on it; however, I have no reason to disbelieve them.
    Kind regards,
    Ian

  • Issue with 2LIS_02_SCL Data source

    Hi Everybody,
    I am facing the two issues below with the 2LIS_02_SCL data source:
    1) It is fetching only the records with ETENR (Delivery Schedule Line Counter) value '1', ignoring the others (e.g. 2, 3 and 4). Hence the data does not reconcile with the ECC system.
    2) The standard field GLMNG is not getting any data, although data exists at the table (EKET) level. So I wrote the code and data is coming in now. But the problem is that this does not seem to consider the ROCANCEL indicator. All the other key figure values come in with a negative sign when the ROCANCEL value is 'X' or 'R', but this field gets only positive values irrespective of the ROCANCEL indicator, and hence shows incorrect values compared to ECC.
    Can anybody help me on this,
    Regards,
    Gopinath

    Hi Gopinath:
       Have you already applied any SAP Note to solve this problem?
    Please check if the SAP Note below is applicable to your system.
    668177 - "LIS BW: wrong quantity for documents with invoice plan"
    Regards,
    Francisco Milán.

  • CVC creation - Strange issue with Master data table of 9AMATNR

    Hi Experts,
    We have encountered a strange issue with the master data table (/BI0/9APMATNR) of InfoObject 9AMATNR.
    We have a BAdI implemented for checking the validity of characteristics before creation of the CVC using transaction /SAPAPO/MC62. This BAdI runs a SELECT on the material master data table /BI0/9APMATNR and returns no value, but the material actually exists in the table (checked through SE16).
    Now we go into InfoObject 9AMATNR and into the Master Data tab. There we go into the master data table
    /BI0/9APMATNR and activate it. After activating the table, it is read by the SELECT statement inside the BAdI (strange) and the CVC is allowed to be created.
    Ideally it should not allow us to activate the SAP standard table /BI0/9APMATNR. I observed in the technical settings of this table that single-record buffering is switched on. (But as per my knowledge the buffer gets refreshed every 2 to 4 minutes, not every 2 days or so.)
    Your expert comment is valuable to us. Thanks.
    Best Regards,
    Chandan Dubey

    Hi Chandan,
                 Try using a WAIT statement of 5 seconds before your SELECT statement.
    I'm not sure whether this will work. Anyway, check it and let me know the result.
    Regards,
    Siva.

  • Performance issue with Oracle data source

    Hi all,
    I have a rather strange problem that I'm stuck on and need some assistance with.
    I have a rules file which pulls data in via a SQL data source that is an Oracle server. If I cut and paste the three sections of "select", "from" and "where" into SQL Developer and run the query, it takes less than 1 second to complete. When I run the "load data" with this rules file, or even use "Retrieve" while editing the rules file, it takes up to an hour to complete/retrieve the data.
    The table in question has millions of rows and I'm using one of the indexed fields to retrieve the data. It's as if Essbase/the rules file is ignoring the index, or I have a config issue with the ODBC settings on the server that is causing the problem.
    ODBC.INI file entry for the Oracle server as follows (changed any sensitive info to xxx or 999).
    [XXX]
    Driver=/opt/data01/hyperion/common/ODBC-64/Merant/5.2/lib/ARora22.so
    Description=DataDirect 5.2 Oracle Wire Protocol
    AlternateServers=
    ApplicationUsingThreads=1
    ArraySize=60000
    CachedCursorLimit=32
    CachedDescLimit=0
    CatalogIncludesSynonyms=1
    CatalogOptions=0
    ConnectionRetryCount=0
    ConnectionRetryDelay=3
    DefaultLongDataBuffLen=1024
    DescribeAtPrepare=0
    EnableDescribeParam=0
    EnableNcharSupport=0
    EnableScrollableCursors=1
    EnableStaticCursorsForLongData=0
    EnableTimestampWithTimeZone=0
    HostName=999.999.999.999
    LoadBalancing=0
    LocalTimeZoneOffset=
    LockTimeOut=-1
    LogonID=xxx
    Password=xxx
    PortNumber=1521
    ProcedureRetResults=0
    ReportCodePageConversionErrors=0
    ServiceType=0
    ServiceName=xxx
    SID=
    TimeEscapeMapping=0
    UseCurrentSchema=1
    Can anyone please advise on this lack of performance?
    Thanks in advance
    Bagpuss

    One other thing I've seen is that if your Oracle data source and Essbase server are in different geographic locations, you can get some delay when retrieving data over the WAN. I guess there is some handshaking going on when passing the data from Oracle to Essbase (either per record or per group of records) that is slowed WAY down over the WAN.
    Our solution was to take the query out of the load rule, run it via SQL*Plus on a command line at the geographic location where the Oracle database is, then FTP the resulting file to where the Essbase server is.
    With upwards of 6 million records being retrieved, it took around 4 hours in the load rule, but running the query via the command line took 10 minutes, and the FTP took less than 5.
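
    For reference, the extract-to-file step of that workaround could look roughly like the sketch below, run on a host close to the Oracle database. The JDBC URL, credentials and SQL text are placeholders; in practice the query would be the one taken out of the load rule.

    import java.io.FileWriter;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    // Hedged sketch: run the load-rule query next to the database and dump the
    // result to a flat file for transfer to the Essbase server. URL, credentials
    // and SQL text are placeholders, not values from the actual environment.
    public class ExtractToFile {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("SELECT * FROM fact_table WHERE period = '2011-06'");
            ResultSetMetaData md = rs.getMetaData();
            PrintWriter out = new PrintWriter(new FileWriter("extract.txt"));
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    if (i > 1) row.append('\t');
                    row.append(rs.getString(i));
                }
                out.println(row.toString());
            }
            out.close();
            rs.close();
            st.close();
            con.close();
        }
    }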

  • Issue with Logical data clear in ASO

    Since we upgraded to version 11, we have done away with clearing data from the ASO cube using #MI files and have started using the Clear Data In Region MDX functionality. From my understanding, a logical clear should load offsetting values to a new slice, which results in a value of 0 being retrieved from the database.
    The problem is that there is a 2-3 minute window between the clear script and the new data load where users are pulling not 0's, but incomplete data. Again, from my understanding this is not how a logical delete should act. This process runs every 2 hours, so there is a 2-3 minute window every 2 hours where the data users see may be incorrect. If we can't resolve this issue, we will have to go back to loading #MI to clear data from the ASO DB, which we are hoping to avoid. Also, we can't do a physical delete because it takes too long.
    Any ideas? Am I misinterpreting the Logical Delete functionality?
    Thanks in advance.

    Just to confuse matters, this problem is intermittent and I haven't been able to successfully replicate it in our Test environment.
    That would seem to indicate something else was going on in the DB that was interfering with the clear, but the logs aren't showing any errors, locks, etc that could have caused the problem.

  • Performance issue with iSetup for loading fnd_flex_value_norm_hierarchy recs

    Hi
    The customer site where I am currently working has implemented iSetup to load data from Hyperion DRM to Oracle GL. They are currently on the 11i.AZ.F patch level.
    The customer has constantly had problems in two areas with iSetup:
    1. iSetup has a limitation that all existing child records, plus the new ones being added, have to be sent in the XML file provided as input to iSetup. For example, one parent ENTXXX might have 15,000 child records, i.e. records in FND_FLEX_VALUE_NORM_HIERARCHY. Now when 2 children are removed from this parent in Hyperion DRM and these two records are sent to Oracle iSetup to be deleted from the parent, the current implementation stipulates that iSetup requires the remaining 14,998 records in the XML input file. That's how it knows to remove the two nodes. This is a huge performance issue.
    They are looking at loading into iSetup only the two deleted nodes with an action code of DELETE and letting iSetup handle this, rather than having to provide iSetup with the entire list of children less the removed nodes.
    2. Can we load using the final XML file directly, by calling a Java class/process directly rather than going through the iSetup Loader concurrent request?
    3. Is there any documentation on how to use the iSetup Java classes?
    Will upgrading to the 11i.AZ.H patch level solve any of the above concerns/issues?
    Regards,
    Richard

    #1. I guess you are referring to the problems you are encountering while loading the GL COA. You may log an SR against GL to learn more about how the API loads data into the instance. DELETE is a specific requirement in your case, and I would suggest you work with the GL team; they may provide a solution or workaround to overcome this performance issue.
    #2. No, the concurrent program does a good amount of pre-processing that you would not get if you called the Java classes directly.
    #3. Not sure what exactly you are looking for. Are you looking for a user guide to write your own iSetup API classes?
    11i.AZ.H has a good amount of performance fixes overall and is the recommended release on top of 11.5.10.2 CU. I would suggest you upgrade to 11i.AZ.H.
    Specific to the GL COA issue, I don't think 11i.AZ.H would really help you much. It is very much a functional issue with respect to the API, and you have to work with the GL team to get a workaround/solution. This may involve customizing the API according to your requirement.
    Thanks
    Mugunthan.

  • Issues with 0PP_C03 delta loads

    Hi,
    I loaded the data from 2LIS_04_P_ARBPL & 0PP_WCCP to 0PP_C03. Initialization of the data load went fine. I did the delta loads to BI and observed that I am not getting any of the key figure data in the cube; all the values show as zero.
    When I looked in the PSA table and in RSA3, I observed that for every order 2 entries got created, one with +ve values and another with -ve values. So at the end, while updating to the cube, the values nullify each other. Because of this I am not able to view the latest data updated via the deltas.
    I am not sure what settings I missed. Could someone please help me fix this issue?
    Thanks & Regards,
    Shanthi.

    Thanks Francisco Milan and Shilpa for the links. They are very useful, but I still wasn't able to find the cause of the issue.
    My data source 2LIS_04_P_ARBPL is of ABR type and the update mode for the KF is Summation. At the data source level, I am getting the values with before & after images (the same entries, one with +ve and another with -ve), and as I am using Summation as my update type for the KF, they get nullified.
    Because of this I am not able to get the values in my report. Could anyone please help me on this, as I am nearing the go-live date? I need to address this issue immediately. Thanks for all your inputs.
    Regards,
    Shanthi.

  • Issues with Infoview data refresh on Crystal Reports based on BW BeX query

    Hi all,
    We applied the BusinessObjects XI 3.1 Fix Pack 3.1 with the Integration Toolkit Fix Pack 3.1 in our environment.
    After that, we started having trouble with Crystal Reports based on BEx queries that use manual input variables.
    The data refresh would not work in InfoView.
    The data refresh would work using the Crystal Reports Designer GUI on a local machine.
    regards,
    Abhishek

    Hi all,
    This thread is for the benefit of all the BW/BO people who faced issues with BO reports not working after certain fix pack installations.
    After a lot of time spent debugging and researching, and some help from SAP, we found that Basis had missed a step in the BO patching. This step involved applying some BW transports in the BW system related to the new Integration Toolkit.
    Please use the notes below for the BW transport task of patching the BOE Integration Toolkit.
    Refer to Note #1472104, which explains loading the
    transports when installing BO Service Packs or Fix Packs. Also please refer to Note #1271751 for advice on transports for BW systems.
    Please go through page 206 (Configuring transports) in the SAP Integration Kit installation guide. You can download this document at the link below.
    https://websmp106.sap-ag.de/~sapidb/011000358700000559912010E/xi31_sp3_bip_sap_inst_en.pdf
    Refer to Note #1345919, which explains the process of loading transports for the SAP IK.
    regards,
    Abhishek

  • Problem with the data loads

    Hi,
    We have a daily data load to the ODS and then to the cube. Yesterday, the load to the ODS took place 140 times and the data was not activated in the ODS. We deleted the requests from the ODS, did a manual load to the ODS, and activated it successfully. My question is: what causes this job to repeat so many times? The job log is as follows.
    Job started
    Step 001 started (program ZBIXX_FDA_START_PROCESS_CHAIN, variant ZCUST_DAILY, user ID BWBATCH)
    Chain Is OK
    Chain ZPC_FDA_CUSTDAILY_TRAN was removed from scheduling
    Program RSPROCESS successfully scheduled as job BI_PROCESS_ABAP with ID 05302100
    Program RSPROCESS successfully scheduled as job BI_PROCESS_DROPINDEX with ID 05302100
    Program RSPROCESS successfully scheduled as job BI_PROCESS_INDEX with ID 05302100
    Program RSPROCESS successfully scheduled as job BI_PROCESS_LOADING with ID 05302100
    Program RSPROCESS successfully scheduled as job BI_PROCESS_LOADING with ID 05302101
    Program RSPROCESS successfully scheduled as job BI_PROCESS_ODSACTIVAT with ID 05302100
    Program RSPROCESS successfully scheduled as job BI_PROCESS_TRIGGER with ID 05302100
    Chain ZPC_FDA_CUSTDAILY_TRAN Was Activated And Scheduled
    Chain Is OK
    The same log is repeated for 140 times in the log window.
    Please Advise.

    hi,
    That is not a problem as long as the jobs finally succeed, right?
    As for the performance problem: the best way is to study the issue the first time the error occurs and rectify it at that first instance.
    Regards,
    Lakshmi

  • Inventory 0IC_C03, issue with historical data (data before stock initialization)

    Hi Experts,
    For our Inventory Management implementation we followed the steps below.
    The initialization data and delta records reconcile correctly with ECC MB5B data, but historical data (2007 and 2008, up to the date before initialization) is not showing correctly (compared to stock on the ECC side); it shows only the difference quantity between the stock initialization and the date of the query.
    We have done all the initial settings in BF11 and the process keys, and filled the setup tables for the BX and BF data sources; we are not using the UM data source.
    1. We loaded BX data and compressed the request (without a tick mark at "No Marker Update").
    2. We loaded the BF initialization data and compressed the request (with a tick mark at "No Marker Update").
    3. For deltas, we are compressing the requests daily (without a tick mark at "No Marker Update").
    Is this the correct process?
    Also, as you mentioned, for BX there is no need to compress (should we not compress the BX request?),
    and do we need to compress the delta requests?
    We have an issue with historical data validation.
    Here is the example:
    We initialized on May 5th, 2009.
    We loaded BX data from 2007 (historical data).
    When we look at the data on January 1st, 2007, on the BI side it shows a value with a negative sign;
    on ECC it shows a different value.
    For example, ECC stock on January 1st, 2007: 1500 KG
    Stock at initialization, May 5th, 2009: 2200 KG
    On the BI side it shows: -700 KG
    2200 + (-700) = 1500,
    but on the BI side it is not showing 1500 KG
    (it shows values in negative with reference to the initialization stock).
    Can you please tell whether this process is correct, or whether we did something wrong in the data loading?
    In the validity table (L table) there are 2 records with SID values 0 and -1; is this correct?
    thanks in advance.
    Regards,
    Daya Sagar
    Edited by: Daya Sagar on May 18, 2009 2:49 PM

    Hi Anil,
    Thanks for your reply.
    1. You have performed the initialization on 15th May 2009.
    yes
    2. For the data after the stock initialization, I believe that you have either performed a full load from BF data source for the data 16th May 2009 onwards or you have not loaded any data after 15th May 2009.
    For the BF delta data after the stock initialization, this was compressed with the marker update option unchecked.
    If this is the case, then I think you need to
    1. Load the data on 15th May (from BF data source) separately.
    Do you mean the BF (material movements) data for 15th May should be compressed with the "No Marker Update" option unchecked, as we do for the BX data source?
    2. Compress it with the No Marker Update option unchecked.
    3. Check the report for data on 1st Jan 2007 after this. If this is correct, then all the history data will also be correct.
    After this you can perform a full load till date
    Here, does "till date" mean that May 15th is not included?
    for the data after stock initialization, and then start the delta process. The data after the stock initialization (after 15th May 2009) should also be correct.
    Can you please clarify these doubts?
    Thanks
    Edited by: Daya Sagar on May 20, 2009 10:20 AM

  • Issue with xsd Data type mapping for collection of user defined data type

    Hi,
    I am facing an issue with the WSDL/XSD mapping for a collection of a user-defined data type.
    Here is the code snippet.
    sample.java
    // Braces, imports and modifiers restored for readability; the method body was elided in the original post.
    import java.io.Serializable;
    import java.util.ArrayList;

    @WebMethod
    public QueryPageOutput AccountQue(QueryPageInput qpInput) { /* ... */ }

    public class QueryPageInput implements Serializable, Cloneable {
        protected Account_IO fMessage = null;
    }

    public class QueryPageOutput implements Serializable, Cloneable {
        protected Account_IO fMessage = null;
    }

    public class Account_IO implements Serializable, Cloneable {
        protected ArrayList<AccountIC> fintObjInst = null;

        public ArrayList<AccountIC> getfintObjInst() {
            return (ArrayList<AccountIC>) fintObjInst.clone();
        }

        public void setfintObjInst(AccountIC val) {
            fintObjInst = new ArrayList<AccountIC>();
            fintObjInst.add(val);
        }
    }

    public class AccountIC {
        protected String Name;
        protected String Desc;

        public String getName() {
            return Name;
        }

        public void setName(String name) {
            Name = name;
        }
    }
    For the sample.java code, the wsdl generated is as below:
    <?xml version="1.0" encoding="UTF-8" ?>
    <wsdl:definitions
    name="SimpleService"
    targetNamespace="http://example.org"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    xmlns:tns="http://example.org"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/"
    >
    <wsdl:types>
    <xs:schema version="1.0" targetNamespace="http://examples.org" xmlns:ns1="http://example.org/types"
    xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:import namespace="http://example.org/types"/>
    <xs:element name="AccountWSService" type="ns1:accountEMRIO"/>
    </xs:schema>
    <xs:schema version="1.0" targetNamespace="http://example.org/types" xmlns:ns1="http://examples.org"
    xmlns:tns="http://example.org/types" xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:import namespace="http://examples.org"/>
    <xs:complexType name="queryPageOutput">
    <xs:sequence>
    <xs:element name="fSiebelMessage" type="tns:accountEMRIO" minOccurs="0"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="accountEMRIO">
    <xs:sequence>
    <xs:element name="fIntObjectFormat" type="xs:string" minOccurs="0"/>
    <xs:element name="fMessageType" type="xs:string" minOccurs="0"/>
    <xs:element name="fMessageId" type="xs:string" minOccurs="0"/>
    <xs:element name="fIntObjectName" type="xs:string" minOccurs="0"/>
    <xs:element name="fOutputIntObjectName" type="xs:string" minOccurs="0"/>
    <xs:element name="fintObjInst" type="xs:anyType" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="queryPageInput">
    <xs:sequence>
    <xs:element name="fPageSize" type="xs:string" minOccurs="0"/>
    <xs:element name="fSiebelMessage" type="tns:accountEMRIO" minOccurs="0"/>
    <xs:element name="fStartRowNum" type="xs:string" minOccurs="0"/>
    <xs:element name="fViewMode" type="xs:string" minOccurs="0"/>
    </xs:sequence>
    </xs:complexType>
    </xs:schema>
    <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://example.org"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://example.org" xmlns:ns1="http://example.org/types">
    <import namespace="http://example.org/types"/>
    <xsd:complexType name="AccountQue">
    <xsd:sequence>
    <xsd:element name="arg0" type="ns1:queryPageInput"/>
    </xsd:sequence>
    </xsd:complexType>
    <xsd:element name="AccountQue" type="tns:AccountQue"/>
    <xsd:complexType name="AccountQueResponse">
    <xsd:sequence>
    <xsd:element name="return" type="ns1:queryPageOutput"/>
    </xsd:sequence>
    </xsd:complexType>
    <xsd:element name="AccountQueResponse" type="tns:AccountQueResponse"/>
    </schema>
    </wsdl:types>
    <wsdl:message name="AccountQueInput">
    <wsdl:part name="parameters" element="tns:AccountQue"/>
    </wsdl:message>
    <wsdl:message name="AccountQueOutput">
    <wsdl:part name="parameters" element="tns:AccountQueResponse"/>
    </wsdl:message>
    <wsdl:portType name="SimpleService">
    <wsdl:operation name="AccountQue">
    <wsdl:input message="tns:AccountQueInput" xmlns:ns1="http://www.w3.org/2006/05/addressing/wsdl"
    ns1:Action=""/>
    <wsdl:output message="tns:AccountQueOutput" xmlns:ns1="http://www.w3.org/2006/05/addressing/wsdl"
    ns1:Action=""/>
    </wsdl:operation>
    </wsdl:portType>
    <wsdl:binding name="SimpleServiceSoapHttp" type="tns:SimpleService">
    <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
    <wsdl:operation name="AccountQue">
    <soap:operation soapAction=""/>
    <wsdl:input>
    <soap:body use="literal"/>
    </wsdl:input>
    <wsdl:output>
    <soap:body use="literal"/>
    </wsdl:output>
    </wsdl:operation>
    </wsdl:binding>
    <wsdl:service name="SimpleService">
    <wsdl:port name="SimpleServicePort" binding="tns:SimpleServiceSoapHttp">
    <soap:address location="http://localhost:7101/WS-Project1-context-root/SimpleServicePort"/>
    </wsdl:port>
    </wsdl:service>
    </wsdl:definitions>
    In the above WSDL, the collection fintObjInst is of type xs:anyType. From the WSDL, I do not see the XSD mapping for AccountIC, which includes Name and Desc. Because of this, when invoking the web service from a different client such as C# (by creating a proxy business service), I am unable to set the parameters for AccountIC. I am using the JAX-WS stack and WLS 10.3. I have already looked at the blog http://weblogs.java.net/blog/kohlert/archive/2006/10/jaxws_and_type.html but was unable to solve this issue. However, at run time, using a tool like SoapUI, when this WSDL is imported I am able to see all the params related to the AccountIC class.
    Can someone help me with this?
    Thanks,
    Sudha.

    Did you try adding the @XmlSeeAlso annotation to the web service?
    @XmlSeeAlso({<package.name>.AccountIC.class})
    This will add the schema for the data type (AccountIC) to the WSDL.
    Hope this helps.
    -Ajay
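
    To make that concrete, the annotation would go on the service implementation class, roughly as sketched below (the class names are taken from the snippets above; everything else is assumed for illustration):

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.bind.annotation.XmlSeeAlso;

    // Hedged sketch: @XmlSeeAlso tells JAXB to include AccountIC in the generated
    // schema even though it is only reachable through the ArrayList exposed as
    // xs:anyType. Class names come from the posted snippets; the rest is assumed.
    @WebService
    @XmlSeeAlso({AccountIC.class})
    public class SimpleService {
        @WebMethod
        public QueryPageOutput AccountQue(QueryPageInput qpInput) {
            // ... existing query logic ...
            return new QueryPageOutput();
        }
    }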
