WLSOAPFaultException: MustUnderstand header not processed

WLSOAPFaultException: MustUnderstand header not processed '{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}Security'
I have deployed my BSSV to the localhost WLS successfully, and I can call the published BSSV method from the admin console. The results come back correctly.
Then, on another machine, I created a proxy for the above WLS and everything looked fine; proxy generation was successful.
Now here is my code:
InventoryManagerPortClient client = new InventoryManagerPortClient();
client.addUNTCredentialProvider("weblogic", "weblogic1");
client.setPortCredentialProviderList();

Stub stub = (Stub) client.getPort();
stub._setProperty(Stub.USERNAME_PROPERTY, "weblogic");
stub._setProperty(Stub.PASSWORD_PROPERTY, "weblogic1");
stub._setProperty(Stub.ENDPOINT_ADDRESS_PROPERTY,
        "http://10.139.153.143:7101/context-root-JP410000/InventoryManagerPort");

client.publishedBSSVmethod();
Here is the error I get:
java.rmi.RemoteException: SOAPFaultException - FaultCode [{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}MustUnderstand] FaultString [MustUnderstand header not processed '{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}Security'] FaultActor [null]No Detail; nested exception is:
     weblogic.wsee.jaxrpc.soapfault.WLSOAPFaultException: MustUnderstand header not processed '{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}Security'
     at inventorymanager.InventoryManager_Stub.getItemPrice(InventoryManager_Stub.java:139)
     at inventorymanager.InventoryManagerPortClient.getItemPrice(InventoryManagerPortClient.java:180)
Please suggest how I can resolve this.
Thanks in advance.

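For background on what the stack is doing when it raises this fault: SOAP requires a receiver to fault whenever a header marked mustUnderstand="1" is not handled by any configured module, so the usual resolution is not to strip the Security header but to configure matching WS-Security (e.g. a username-token policy) on the side that receives it. The standalone sketch below illustrates the check the runtime performs; it is plain JDK DOM code with invented class and method names, not WebLogic API:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Set;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class MustUnderstandDemo {

    static final String ENV_NS = "http://schemas.xmlsoap.org/soap/envelope/";
    static final String WSSE_NS =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    /** A toy envelope carrying a mustUnderstand WS-Security header. */
    static String sampleEnvelope() {
        return "<soap:Envelope xmlns:soap='" + ENV_NS + "'>"
             + "<soap:Header>"
             + "<wsse:Security soap:mustUnderstand='1' xmlns:wsse='" + WSSE_NS + "'/>"
             + "</soap:Header><soap:Body/></soap:Envelope>";
    }

    /**
     * Returns the namespace of the first mustUnderstand="1" header the
     * receiver does not claim to understand, or null if all are covered.
     */
    static String firstUnprocessedHeader(String envelope, Set<String> understood)
            throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder().parse(
            new ByteArrayInputStream(envelope.getBytes(StandardCharsets.UTF_8)));
        NodeList headers = doc.getElementsByTagNameNS(ENV_NS, "Header");
        if (headers.getLength() == 0) return null;
        NodeList kids = headers.item(0).getChildNodes();
        for (int i = 0; i < kids.getLength(); i++) {
            Node n = kids.item(i);
            if (!(n instanceof Element)) continue;
            Element h = (Element) n;
            if ("1".equals(h.getAttributeNS(ENV_NS, "mustUnderstand"))
                    && !understood.contains(h.getNamespaceURI())) {
                return h.getNamespaceURI();
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        // A receiver with no security module configured understands nothing:
        // this is the situation that produces the MustUnderstand fault above.
        System.out.println(firstUnprocessedHeader(sampleEnvelope(), Set.of()));
        // A receiver whose security policy handles wsse headers raises no fault.
        System.out.println(firstUnprocessedHeader(sampleEnvelope(), Set.of(WSSE_NS)));
    }
}
```

In other words, the fault in the stack trace above means the receiving endpoint has no security configuration that claims the wsse Security namespace, which is why the direction to investigate is the service's WS-Security policy rather than the client stub properties.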

Similar Messages

  • MustUnderstand Header not understood ERROR while invoking WS

    Hi All,
    I am using JDeveloper 11.1.1.4 and BPEL to invoke the web service. The web service is security-enabled, so I followed these steps:
    rightclick the service ->
    configure WS Policies->
    select oracle/wss_username_token_client_policy ->
    Define the csf key value in Override value property
    But while invoking the web service it shows this error:
    <fault>
      <bpelFault>
        <faultType>0</faultType>
        <remoteFault>
          <part name="summary">
            <summary>MustUnderstand headers:[{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}Security] are not understood</summary>
          </part>
          <part name="detail">
            <detail>javax.xml.ws.soap.SOAPFaultException: MustUnderstand headers:[{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}Security] are not understood</detail>
          </part>
          <part name="code">
            <code>SOAP-ENV:MustUnderstand</code>
          </part>
        </remoteFault>
      </bpelFault>
    </fault>
    How do I remove the MustUnderstand header from this Security element?
    Any ideas or workarounds?
    Thanks.


  • Did not understand "MustUnderstand" header(s)

    Hi,
    I am trying to call a web service hosted on WebLogic 8.1 SP3 from an Axis (1.3) based client. The web service implements an OASIS 2004 standard. While calling from the Axis client I get the following error:
    Did not understand "MustUnderstand" header(s)
    at org.apache.axis.handlers.soap.MustUnderstandChecker.invoke(MustUnderstandChecker.java:96)
    Can anybody help with this issue?
    When I look at the SOAP response itself, it does not show any errors.

    Yes,
    actually, I wrote the following Java code and it works perfectly. If I deploy the same code into a Java embedding activity I face the above issue. As far as I can tell, the code fails at m_session and throws a NullPointerException.
    This is the original Java code, which works perfectly if I run it directly.
    public class MyClass implements MyInterface {

        public MyClass() {
            super();
        }

        public String TestAPIMet(String c) {
            IAgileSession m_session = null;
            IAdmin admin = null;
            try {
                HashMap params = new HashMap();
                params.put(AgileSessionFactory.USERNAME, "XXXXX");
                params.put(AgileSessionFactory.PASSWORD, "XXXXXXX");
                AgileSessionFactory instance =
                    AgileSessionFactory.getInstance("XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX");
                m_session = instance.createSession(params);
                System.out.println(m_session);
                admin = m_session.getAdminInstance();
                IAgileClass cls = admin.getAgileClass("ProblemReport");
                IServiceRequest psr = (IServiceRequest) m_session.createObject("ProblemReport", c);
            } catch (APIException e) {
                e.printStackTrace();
            } finally {
                // Guard against the NPE described above: m_session stays null
                // when createSession() fails inside the embedding activity.
                if (m_session != null) {
                    m_session.close();
                }
            }
            return "Problem Report Num" + c;
        }
    }

  • Bill and Routing Interface concurrent program is not processing component

    Hi All,
    While running the Bill and Routing Interface concurrent program, the component item (line item) from bom_inventory_comps_interface is not processed; it still shows process_flag as 1, whereas the billing item (header item) completes successfully. I have tried all the options, supplying component_sequence_id and batch_id, but it still does not process the component item.
    Is it advisable to populate bill_sequence_id and component_sequence_id in the interface table using the bom_inventory_components_s sequence?
    Please help me resolve this issue.
    My header insert statement:
    INSERT INTO apps.bom_bill_of_mtls_interface@system_link_visma
                (assembly_item_id, organization_id,
                 alternate_bom_designator, last_update_date,
                 last_updated_by, creation_date, created_by,
                 revision, item_number, item_description,
                 implementation_date, transaction_type,
                 process_flag, assembly_type, batch_id)
         VALUES (l_inv_item_id, l_org_id,
                 NULL, SYSDATE,
                 1318, SYSDATE, 1318,
                 l_revision, l_item_num, l_description,
                 SYSDATE, 'CREATE',
                 1, 1, 10003535);
    Component insert statement:
    INSERT INTO apps.bom_inventory_comps_interface@system_link_visma
                (operation_seq_num, component_item_id,
                 item_num, basis_type, component_quantity,
                 auto_request_material, effectivity_date,
                 disable_date, planning_factor,
                 component_yield_factor,
                 enforce_int_requirements,
                 include_in_cost_rollup, wip_supply_type,
                 supply_subinventory, supply_locator_id,
                 check_atp, optional,
                 mutually_exclusive_options,
                 low_quantity, high_quantity,
                 so_basis, shipping_allowed,
                 include_on_ship_docs, required_to_ship,
                 required_for_revenue, component_remarks,
                 transaction_type, process_flag,
                 assembly_item_id, component_item_number,
                 batch_id, component_sequence_id)
         VALUES (l_operation_seq, l_comp_item_id,
                 cur2.item_sequence, l_basis, cur2.quantity,
                 l_auto_request_mtls, cur2.from_date,
                 cur2.TO_DATE, cur2.planning_factor,
                 cur2.yield_factor,
                 l_enforce_int_requirements,
                 l_include_in_cost_rollup, l_supply_type,
                 l_supply_subinventory, NULL,
                 l_check_atp, l_optional,
                 l_mutually_exclusive_options,
                 cur2.minimum_quantity, cur2.maximum_quantity,
                 l_sale_order_basis, l_shippable_flag,
                 l_include_on_ship_docs, l_required_to_ship,
                 l_required_for_revenue, cur2.comments,
                 'CREATE', 1,
                 l_inv_item_id, l_comp_item_num,
                 10003535, apps.bom_inventory_components_s.nextval@system_link_visma);
    Subcomponent insert statement:
    INSERT INTO apps.bom_sub_comps_interface@system_link_visma
                (substitute_component_id,
                 substitute_item_quantity,
                 assembly_item_id, component_item_id,
                 operation_seq_num, organization_id,
                 substitute_comp_number,
                 component_item_number,
                 assembly_item_number,
                 transaction_type, process_flag,
                 enforce_int_requirements,
                 effectivity_date, component_sequence_id, batch_id)
         VALUES (l_sub_comp_item_id,
                 cur3.quantity,
                 l_inv_item_id, l_comp_item_id,
                 cur2.operation_sequence, l_org_id,
                 l_sub_comp_item_num,
                 l_comp_item_num,
                 l_item_num,
                 'CREATE', 1,
                 l_enforce_int_requirements,
                 SYSDATE, apps.bom_inventory_components_s.currval@system_link_visma, 10003535);
    Thanks
    Raman Sharma

    You need to populate the organization_id or organization_code in bom_inventory_comps_interface.
    Here is a minimal insert:
    INSERT INTO bom.bom_inventory_comps_interface
                (operation_seq_num, last_update_date, last_updated_by,
                 creation_date, created_by, process_flag, transaction_type,
                 bom_item_type,
                 effectivity_date, organization_code, assembly_item_number,
                 item_num, component_item_number, component_quantity)
         VALUES (1                                  -- operation_seq_num
                 ,SYSDATE, 1433
                 ,SYSDATE, 1433, 1                  -- process_flag
                 ,'Create'
                 ,4  -- bom_item_type: 1 Model; 2 Option class; 3 Planning; 4 Standard; 5 Product family
                 ,SYSDATE - 1, 'PUB', 'SSGPARENT1'  -- assembly_item_number
                 ,10                                -- item_num
                 ,'SSGCOMP1'                        -- component_item_number
                 ,10                                -- component_quantity
                );
    Sandeep Gandhi

  • JSP 2.0 JSLT not processing defined context - Tomcat 5.0.19

    Hi,
    I am using Tomcat 5 for my development. If I define the context in server.xml, the JSTL tags are not processed. I added jstl.jar and standard.jar to the common/lib directory.
    I have a simple jsp file p.jsp
    ======================
    p.jsp
    <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
    <%@ taglib prefix="sql" uri="http://java.sun.com/jsp/jstl/sql" %>
    JSLT Test
    <br>
    <c:out value="${order.amount + 5}"/>
    Test 1.
    I copied p.jsp into the webapps/ROOT directory and started Tomcat. Using a web browser I accessed p.jsp:
    http://localhost:8080/p.jsp
    output:
    ======
    JSLT Test
    ${order.amount + 5}
    Test 2:
    I created a new context in server.xml like this:
    <Context path="/test" docBase="test"
    debug="5" reloadable="true" crossContext="true"/>
    copied p.jsp into the webapps/test directory.
    I got the following output:
    http://localhost:8080/test/p.jsp
    output:
    ======
    JSLT Test
    ${order.amount + 5}
    Test 3.
    This time I didn't define any context; I created a directory webapps/nocontext and copied p.jsp there. The output was again:
    JSLT Test
    ${order.amount + 5}
    Any idea why this page is not processing the EL? The output I am looking for is:
    JSLT Test
    5
    SR

    Hi all,
    I just found the problem: my web.xml header was wrong. With the corrected header below, declaring the Servlet 2.4 schema, JSP 2.0 evaluates the EL expressions. The following web.xml works:
    <web-app xmlns="http://java.sun.com/xml/ns/j2ee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee web-app_2_4.xsd"
             version="2.4">
        <display-name>Welcome to Tomcat</display-name>
        <description>
            Welcome to Tomcat
        </description>
    </web-app>
    http://localhost:8080/test/p.jsp
    output:
    ======
    JSLT Test
    5
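    For contrast, a legacy Servlet 2.3 DTD header like the sketch below (the poster's original "wrong" header is not shown, so this is a guess at its shape) causes JSP 2.0 to treat the application as pre-2.4, and ${...} expressions are then emitted literally instead of being evaluated:

    ```xml
    <!-- Servlet 2.3 descriptor (sketch): EL in JSP 2.0 pages is NOT
         evaluated for web apps declared at this version. -->
    <!DOCTYPE web-app PUBLIC
      "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
      "http://java.sun.com/dtd/web-app_2_3.dtd">
    <web-app>
      <display-name>Welcome to Tomcat</display-name>
    </web-app>
    ```

    This is why the same p.jsp printed the raw expression in every context until the descriptor was upgraded to the 2.4 schema.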

  • How to configure header note in quotation through va22.

    How do I configure a header note in a quotation through VA22?
    I want to enter the data manually through VA22.
    Please give me a step-by-step idea of how to solve this.
    <h4>We have not configured text determination because there are no fixed rules from our client, so the users key in text based on some other characteristics. Once the quotation is final they need to change the customer number in VA22, which wipes off the texts. Now I need to fill that text box with the texts keyed in while creating the quotation. I can read the texts using the function module "READ_TEXT", but the question is how I can fill the text area so that the user can modify the text before saving.</h4>

    Try creating a dummy text ID which will be part of the text determination procedure at customer sales text & document header text.
    Otherwise, maintain an SO10 standard text against this text ID; whenever a user creates a quotation this text will be defaulted irrespective of the customer.
    Then, if the user wants to modify the text, they can enter it manually in this text ID. Print this text in the header.
    Regards,
    Reazuddin MD

  • [Solved]PM: swap header not found attempting to suspend encrypted disk

    Hi all,
    I have my root partition set up as a dm-crypt encrypted volume, and I followed the instructions here to set up a swap file.
    I also followed these instructions to add the resume hook to my mkinitcpio.conf between encrypt and filesystems.
    I modified the /etc/default/grub file like so:
    GRUB_CMDLINE_LINUX_DEFAULT="cryptdevice=/dev/disk/by-uuid/0cb6b266-ce81-4b2f-9958-722c788c46ef:cryptroot cryptkey=/dev/disk/by-uuid/18868ab9-e0dd-4634-9f9e-69f3d3686d3f:ext2:/laptopkey root=/dev/mapper/cryptroot resume=/dev/mapper/cryptroot resume_offset=4952064"
    I determined the offset using the swap-offset tool provided by the uswsusp package.
    And I ran
    mkinitcpio -p linux
    and
    grub-mkconfig
    When I type "systemctl hibernate" the system begins to hibernate, then comes back up immediately. journalctl -xn gives this:
    Jan 08 23:38:57 lefty-laptop systemd[1]: systemd-hibernate.service: main process exited, code=exited, status=1/FAILURE
    Jan 08 23:38:57 lefty-laptop systemd[1]: Failed to start Hibernate.
    -- Subject: Unit systemd-hibernate.service has failed
    -- Defined-By: systemd
    -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
    -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/catalog/be02cf6855d2428ba40df7e9d022f03d
    -- Unit systemd-hibernate.service has failed.
    -- The result is failed.
    Jan 08 23:38:57 lefty-laptop systemd[1]: Dependency failed for Hibernate.
    -- Subject: Unit hibernate.target has failed
    -- Defined-By: systemd
    -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
    -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/catalog/be02cf6855d2428ba40df7e9d022f03d
    -- Unit hibernate.target has failed.
    -- The result is dependency.
    Jan 08 23:38:57 lefty-laptop systemd[1]: Service sleep.target is not needed anymore. Stopping.
    Jan 08 23:38:57 lefty-laptop systemd[1]: Unit systemd-hibernate.service entered failed state.
    Jan 08 23:38:57 lefty-laptop systemd[1]: Stopping Sleep.
    -- Subject: Unit sleep.target has begun shutting down
    -- Defined-By: systemd
    -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
    -- Unit sleep.target has begun shutting down.
    Jan 08 23:38:57 lefty-laptop systemd[1]: Stopped target Sleep.
    -- Subject: Unit sleep.target has finished shutting down
    -- Defined-By: systemd
    -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
    -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/catalog/9d1aaa27d60140bd96365438aad20286
    -- Unit sleep.target has finished shutting down.
    Jan 08 23:38:58 lefty-laptop kernel: Freezing user space processes ... (elapsed 0.001 seconds) done.
    Jan 08 23:38:57 lefty-laptop wpa_actiond[1017]: Interface 'wlan0' lost connection to network 'braains'
    Jan 08 23:38:58 lefty-laptop kernel: PM: Marking nosave pages: [mem 0x0009d000-0x000fffff]
    Jan 08 23:38:58 lefty-laptop kernel: PM: Basic memory bitmaps created
    Jan 08 23:38:58 lefty-laptop kernel: PM: Preallocating image memory... done (allocated 294215 pages)
    Jan 08 23:38:58 lefty-laptop kernel: PM: Allocated 1176860 kbytes in 0.22 seconds (5349.36 MB/s)
    Jan 08 23:38:58 lefty-laptop kernel: Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.
    Jan 08 23:38:58 lefty-laptop kernel: e1000e 0000:00:19.0: setting latency timer to 64
    Then a bunch more stuff, then this:
    Jan 08 23:38:58 lefty-laptop kernel: PM: thaw of devices complete after 955.701 msecs
    Jan 08 23:38:58 lefty-laptop kernel: PM: writing image.
    Jan 08 23:38:58 lefty-laptop kernel: PM: Using 1 thread(s) for compression.
    PM: Compressing and saving image data (197765 pages)...
    Jan 08 23:38:58 lefty-laptop kernel: PM: Image saving progress: 0%
    Jan 08 23:38:58 lefty-laptop kernel: PM: Image saving progress: 10%
    Jan 08 23:38:58 lefty-laptop kernel: PM: Image saving progress: 20%
    Jan 08 23:38:58 lefty-laptop kernel: PM: Image saving progress: 30%
    Jan 08 23:38:58 lefty-laptop kernel: PM: Image saving progress: 40%
    Jan 08 23:38:58 lefty-laptop kernel: PM: Image saving progress: 50%
    Jan 08 23:38:58 lefty-laptop kernel: PM: Image saving progress: 60%
    Jan 08 23:38:58 lefty-laptop kernel: PM: Image saving progress: 70%
    Jan 08 23:38:58 lefty-laptop kernel: PM: Image saving progress: 80%
    Jan 08 23:38:58 lefty-laptop kernel: PM: Image saving progress: 90%
    Jan 08 23:38:58 lefty-laptop kernel: PM: Image saving progress: 100%
    Jan 08 23:38:58 lefty-laptop kernel: PM: Image saving done.
    Jan 08 23:38:58 lefty-laptop kernel: PM: Wrote 791060 kbytes in 4.67 seconds (169.39 MB/s)
    Jan 08 23:38:58 lefty-laptop kernel: PM: S
    Jan 08 23:38:58 lefty-laptop kernel: PM: Swap header not found!
    Jan 08 23:38:58 lefty-laptop kernel: |
    Jan 08 23:38:58 lefty-laptop kernel: PM: Basic memory bitmaps freed
    Jan 08 23:38:58 lefty-laptop kernel: Restarting tasks ... done.
    Jan 08 23:38:58 lefty-laptop kernel: video LNXVIDEO:00: Restoring backlight state
    Jan 08 23:38:58 lefty-laptop laptop-mode[28617]: Laptop mode
    Jan 08 23:38:58 lefty-laptop laptop-mode[28618]: enabled, not active [unchanged]
    Jan 08 23:38:58 lefty-laptop laptop-mode[28622]: Laptop mode
    Jan 08 23:38:58 lefty-laptop laptop-mode[28623]: enabled, not active [unchanged]
    Perhaps the "PM: swap header not found" is not the primary error?
    Last edited by LeftyAce (2014-01-12 06:08:55)

    LeftyAce wrote:
    ROOKIE: which UUID do you use? If I list /dev/disk/by-uuid I get two results:
    total 0
    lrwxrwxrwx 1 root root 10 Jan 11 18:06 06f4c97e-fe09-45a8-a937-dedb6b18339b -> ../../dm-0
    lrwxrwxrwx 1 root root 10 Jan 11 18:06 a0eb0a15-6d5d-4df2-a0d3-6f7fb00885c6 -> ../../sda1
    I think one of them is the actual device, the other is the dm-crypt container? I guess I could try both...I'll report back.
    Usually I use the output of blkid to find the correct UUID. The UUID to use for / or swap is the one pointing to one of /dev/dm-*.
    LeftyAce wrote:For the record I am using resume=/dev/mapper/cryptroot. I think since the swapfile is a file not a block device in my case that's important. Since the encrypt hook runs first, that mapper volume is present and unlocked by the time resume comes around.
    In my case the same happens but for some reason it doesn't seem to work, and I've been procrastinating submitting a bug (I use hibernation very rarely and I know of a way that works).
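    For reference, the hook ordering the first post describes (resume added between encrypt and filesystems) would look like this in /etc/mkinitcpio.conf; the other hooks shown here are common defaults and may differ on your system:

    ```shell
    # /etc/mkinitcpio.conf (sketch): encrypt must run before resume so the
    # dm-crypt container is already unlocked when resume looks for the image.
    HOOKS="base udev autodetect modconf block encrypt resume filesystems keyboard fsck"
    ```

    After editing, regenerate the initramfs with mkinitcpio -p linux, as the poster did.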

  • Header not getting  displayed in the next page in the second table

    Dear Experts ,
    I have a query regarding a header not being displayed on the next page in the second table of the main window. Let me elaborate on the issue.
    I have a Smartform with two tables, TABLE 1 and TABLE 2, in the MAIN window. TABLE 1 is for pending POs and TABLE 2 is for future-delivery-date POs, separated by text in between.
    The headers for both tables and the data are displayed properly if the total output fits on one page. But if there are more entries in TABLE 1, the header for TABLE 2 is not displayed. The header for TABLE 1 still displays properly on the next page when there are more entries.
    The only issue is that the header for TABLE 2 is not displayed on the second page, although it is displayed if there are fewer entries in TABLE 1 and the total output fits on one page.
    Please provide a detailed solution to this problem, as the requirement is urgent.
    Thanks,
    Regards,
    Sachin Hada

    Hi Sachin,
    you need to create two pages, FIRST and NEXT.
    On page FIRST:
      next page --> NEXT
    On page NEXT:
      page      --> NEXT
      next page --> NEXT
    Copy the main window of the first page so that the main window is the same on the next page.
    I hope this is helpful for you.
    Thanks & Regards,
    BHUPAL.

  • Job getting cancelled for 'Chain not processed for date-2007031920070318' ?

    Hi All,
    We have 2 process chains running only on weekdays: one for sales and the other for billing. A cancelled job is created for one of the process chains with the job log message 'Chain not processed for date-2007031920070318' for the days it does not run, whereas the other chain has no issues, even though the same monitor job program runs for both. The only difference is in the variant, and there is nothing much there beyond its being maintained for 2 different chains.
    We also debugged the monitor job program. It basically does nothing other than display messages depending on the status of the chain log.
    Please let me know your suggestions as to why this happens.
    Thanks,
    krishna

    It is also giving the following error messages:
    Caution: Program has changed
    Caution: At time of termination, active source code no longer available
    After the above two statements it shows a "?" sign before every statement, in the following manner:
    024300 ?     lit_ekpo_dummy1[] = it_ekpo[].                     
    024310 ?     DELETE lit_ekpo_dummy1 WHERE ebeln NE tab1-ebeln.  
    024320 ?     DESCRIBE TABLE lit_ekpo_dummy1 LINES count01.      
    024330 ?                                                        
    024340 ?     REFRESH lit_ematn_dummy[].                         
    024350 ?     lit_ematn_dummy[] = lit_ematn[].

  • OPP(Output Post Processor) not processing the report (XML Publisher)

    Hi,
    I have defined a concurrent program (an XML Publisher report) and ran it, but it failed with the errors below. I am running the report in Oracle E-Business Suite 11.5.10.2 through a concurrent manager other than the Standard Manager. My guess is that the OPP (Output Post Processor) is not processing request outputs coming from a different manager/work shift, since requests run through the Standard Manager are all OK.
    In OAM (Oracle Applications Manager) -> OPP, there is only 1 process allocated for both Actual and Target. If we increase the number of processes, will it work?
    /home/app/oracle/prodcomn/temp/pasta19356_0.tmp:
    /home/app/oracle/prodcomn/temp/pasta19356_1.tmp: No such file or directory
    Pasta: Error: Print failed. Command=lp -c -dkonica4 /home/app/oracle/prodcomn/temp/pasta19356_1.tmp
    Pasta: Error: Check printCommand/ntPrintCommand in pasta.cfg
    Pasta: Error: Preprocess or Print command failed!!!
    Has anybody experienced a similar issue?
    Thanks in advance.
    Rownald

    Hello,
    Just some additional test info. We have 2 concurrent managers, which I think is affecting the XML report output: the Standard Manager (running 24 hours) and a Warehouse manager (9am-4:15pm).
    When I run the report before or after the Warehouse manager work shift (9am-4:15pm), the output is fine, meaning a PDF is generated and Pasta printing is OK. However, when the report is run between 9am and 4:15pm, it only produces XML output, with the Pasta printing error above. I also found that re-opening output that ran prior to the Warehouse work shift, during the 9am-4:15pm window, also results in just XML output instead of the previous PDF.
    Has anybody experienced a similar issue? Any ideas? The report is not directly defined as an "inclusion" in the Warehouse manager, only the program calling it. Does having multiple concurrent managers affect the XML Publisher output?
    Thanks in advance for your ideas.

  • HT5631 How do I verify my Apple ID? I can't sign in to it in Mail, and I can't make a new one because it just will not process. I'm trying to set up iCloud between an iPad and an iPhone and having a lot of difficulty. Please help; I need it done by later today.


    In order to use your Apple ID to create an iCloud account, the primary email address associated with the ID must first be verified. To do this, Apple will send a verification email to your account, and you must respond to the email by clicking the Verify Now link. Make sure you check the spam/junk folder as well as the inbox. If it isn't there, go to https://appleid.apple.com, click Manage your Apple ID, sign in, click Name, ID and Email addresses on the left, then to the right click Resend under your Primary Email Address to resend the verification email.

  • Could not process due to error: com.sap.aii.adapter.file.ftp.FTPEx: 550

    Hi Experts,
    We have many file-to-EDI scenarios wherein the XI system picks up the XML and sends it to customers via EDI. Recently we faced a problem, so we created a back-up system (production copy) and tested it successfully. After some time messages were routed to this back-up system; when we noticed, we stopped it. We then tried to resend the same messages from the actual production system to our customers. Now the problem is that the XI (production) system is unable to pick up these files. I checked communication monitoring and encountered the below error message.
    Could not process due to error: com.sap.aii.adapter.file.ftp.FTPEx: 550.550
    Can anyone let me know how to fix the issue or what needs to be done?
    Your help is highly appreciated.
    Regards
    Faisal

    Hi,
    It seems to be a problem with file permissions. Please ask your Basis team to do the following:
    1. Set the permissions for the FTP user you are using to 777 (full access to read, write and delete).
    2. If you have access to the PI server, try to connect to the FTP server from there using the command prompt (open ftp ...); you should see the same error there. Inform your network team about it.
    3. Clear all the files already placed on the FTP server (take a backup) and test afresh after the permissions are set by the Basis team.
    Regards
    Aashish Sinha

  • How can i get pr header note data into po while creating a po using pr no

    Hi all,
    Can anyone tell me how I can get the PR header note's data into the PO when I am creating a PO using that PR?
    Also, what is the name of the table into which the header note's data is copied?
    Thanks

    Hi,
    Unfortunately I have to tell you that the described behaviour is not an error but SAP standard behaviour.
    Please see note 448814, question 7.
    In the SAP standard it is not foreseen to take over the header text of a PREQ into a PO.
    You can try to use the BADI ME_REQ_HEADER_TEXT.
    I have found a document which gives coding for the purchase requisition header long text using the BADI ME_PROCESS_REQ_CUST:
    http://wiki.sdn.sap.com/wiki/display/ABAP/PurchaseReq.HeaderLongTextusingBadi-ME_PROCESS_REQ_CUST
    I hope it can help you!
    Best Regards,
    Arminda Jack

  • Regarding Rough FISCAL YEAR VARIANT NOT PROCESSED

    Hi Experts,
    I am loading data from 2LIS_03_UM to 0IC_C03. While executing the DTP I get the error 'Rough Fiscal Year Variant Not Processed'.
    Can anyone please suggest how to resolve this issue?
    Thanks
    Laxman

    Hi,
    Use the settings here:
    transaction code SPRO
    -> SAP NetWeaver
    -> Business Intelligence
    -> Settings for BI Content
    -> Trade Industries
    -> Retailing
    -> Set fiscal year variant
    regards,
    pascal

  • Lync 2013 client not processing history spooler folder (conversation history)

    We have been using Lync 2013, fully patched to the latest and greatest, along with Exchange 2013 for about 6 months now, and everything is working great. (I did an AD migration from our old parent company in the UK as well as migrating Exchange from 2010 to 2013, all without a hitch for the most part.)
    The issue that keeps popping up intermittently is conversation history. I managed to work around most of it and got it processing, with the exception of 1 user.
    If he logs into another PC his conversation history works fine.
    However, on his PC it does not seem to want to process the history spooler folder in "C:\Users\User\AppData\Local\Microsoft\Office\15.0\Lync\[email protected]"; there are 737 files in there (all of his conversation history).
    What I did, and it seems to sort of work: I signed out of Lync, deleted the user's SIP folder after backing up the history spooler folder, logged back into Lync (which recreated the sip_user folder), logged back out of Lync, copied the history spooler folder back into the SIP folder, and signed into Lync.
    This took 10 files out of the history spooler folder, which then appeared in the conversation history, both in the Lync client and Outlook, but it did not bring all of them in. In some testing, if you log out and back in, it processes a few more, but not all.
    EWS & MAPI settings and all other Lync configuration settings match those of users with no conversation history issues.
    So now the questions:
    1) What could cause whatever mechanism processes the history spooler folder to stop, or to process only a few items during sign-in? Also, what is that mechanism?
    2) Is there any way through PowerShell to force synchronization of the history spooler folder?
    Thank you

    In some more testing I checked the history spooler folder, and there were about 700+ files.
    As a note: upon deleting the user's SIP folder and logging back in (thus re-creating the folder), new conversations were saving, so I did "fix" that part during my initial work prior to opening this thread.
    I then removed a good chunk of the *.hist files and left only conversations from this month. I signed out and back in, and 10 files processed. Did it again, another 10. Did it again, and it did 4, then stopped. I noticed some files over 1 MB, and remembered coming across this post (Lync not processing files over 1 MB), so I removed the 1 MB files and logged out and back in; it processed 10 more.
    I then put back all the files except those over 1 MB, and logged in and out about 70 times to process the whole folder. Now all of the conversations are in Lync and Outlook, and new conversations are being saved. However, I now have a folder with 130 files over 1 MB that will not process at all.
    So now I am trying to find some validity to the statement in the link above:
    "We did open a case with Microsoft and they confirmed it is a bug in the Exchange EWS HTTP stack. It is supposedly under review by the Dev team, but my guess is that it won't be fixed soon. The magic size is exactly 1MB or 1048576 bytes."
    If that is actually true, it is a major problem, as it is nearly 2 months later and not even a CU or PU has been released to address the issue.
