Load OLAPTRAIN cube failed

Hi, I followed every step at this link: http://www.oracle.com/technology/obe/olap_cube/buildicubes.htm
First, when I tried to maintain the TIME dimension and load data, I got the ORA-01843 error 'not a valid month'.
I'm Italian, and I found that the error was in the mapping window, in the ALL_YEARS END_DATE field: instead of TO_DATE('2008-DEC-31', 'YYYY-MON-DD') I have to type DIC as the month. After that I can maintain the TIME dimension and view all the data through AWM.
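For reference, a locale-independent way to write that literal is either to pass the date language explicitly as TO_DATE's optional third argument or to use a purely numeric format mask (a sketch; that the AWM mapping field accepts either variant of the expression is my assumption):

-- Force English month abbreviations regardless of the session language
-- (assumption: the mapping field accepts this three-argument form):
SELECT TO_DATE('2008-DEC-31', 'YYYY-MON-DD', 'NLS_DATE_LANGUAGE = AMERICAN') FROM dual;

-- Or avoid language-dependent month names altogether:
SELECT TO_DATE('2008-12-31', 'YYYY-MM-DD') FROM dual;

Either expression resolves to the same date whether the session language is Italian or English.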
But when I try to manage the SALES_CUBE I continue getting this error:
NI: XOQ-01600: Error DML OLAP "ORA-01843: not a valid month
" during DML Execution "SYS.AWXML!R11_LOAD_MEASURES('SALES_CUBE.CUBE' SYS.AWXML!___R11_LONG_ARG_VALUE(SYS.AWXML!___R11_LONG_ARG_DIM 1) SYS.AWXML!___R11_LONG_ARG_VALUE(SYS.AWXML!___R11_LONG_ARG_DIM 2) SYS.AWXML!___R11_LONG_ARG_VALUE(SYS.AWXML!___R11_LONG_ARG_DIM 3) 'NO')", Generico in TxsOqStdFormCommand::execute
at oracle.olapi.data.source.DataProvider.callGeneric(Unknown Source)
at oracle.olapi.data.source.DataProvider.callGeneric(Unknown Source)
at oracle.olapi.data.source.DataProvider.executeBuild(Unknown Source)
at oracle.olap.awm.wizard.awbuild.UBuildWizardHelper$1.construct(Unknown Source)
at oracle.olap.awm.ui.SwingWorker$2.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
How can I solve this error? Can anyone help me?
Edited by: battle84 on 27-Aug-2009 11.16

I have nls_language=italian, and my Windows XP is in Italian... I don't understand why, as I said, the TIME dimension works with the DEC -> DIC correction in the dates but the SALES_CUBE does not. Do I have to check the data in the fact table?
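Before digging into the fact table, it may be worth checking which date language the session performing the build actually uses, and forcing English month abbreviations; a minimal sketch, assuming the cube build honours your session's NLS settings (which I have not verified):

-- See what the session currently uses:
SELECT parameter, value
  FROM nls_session_parameters
 WHERE parameter IN ('NLS_LANGUAGE', 'NLS_DATE_LANGUAGE');

-- Force English month abbreviations for this session before running the build:
ALTER SESSION SET NLS_DATE_LANGUAGE = 'AMERICAN';

If the SALES_CUBE mappings contain any other TO_DATE('...-DEC-...') literals like the one already fixed for TIME, those would need the same DEC -> DIC correction (or the language-independent forms shown above).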

Similar Messages

  • DTP load to cube failed with no errors

    Hi,
    I executed a DTP load from cube to cube with a huge data volume, approx. 25 million records. It is a full load. The DTP load failed without any errors in any packages. The load ran for 1 day and 33 mins, and the request just turned red without any error messages. I have checked ST22 and the SM37 job logs. No clue.
    I restarted this load and it failed again at approximately the same time, without any error messages. Please throw some light on this.
    Thanks

    Hi,
    Thanks for the response. There are no abnormal messages in the SM37 job logs. This process is running fine in production. We are making a small enhancement.
    08/22/2010 15:36:01 Overlapping check with archived data areas for InfoProvider Cube                             RSDA          230          S
    08/22/2010 15:36:42 Processing of data package 000569 started                                                        RSBK          244          S
    08/22/2010 15:36:47 All records forwarded                                                                            RSM2          730          S
    08/22/2010 15:48:58 Overlapping check with archived data areas for InfoProvider Cube                            RSDA          230          S
    08/22/2010 15:49:48 Processing of data package 000573 started                                                        RSBK          244          S
    08/22/2010 15:49:53 All records forwarded                                                                            RSM2          730          S
    08/22/2010 16:01:25 Overlapping check with archived data areas for InfoProvider Cube                            RSDA          230          S
    08/22/2010 16:02:13 Processing of data package 000577 started                                                        RSBK          244          S
    08/22/2010 16:02:18 All records forwarded                                                                            RSM2          730          S
    08/22/2010 16:10:46 Overlapping check with archived data areas for InfoProvider Cube                            RSDA          230          S
    08/22/2010 16:11:12 Job finished                                                                                00           517          S
    As shown here, the process ended at package 577 and package 578 did not start. I could work around it by executing multiple DTPs, but this process runs fine in production. There are no error messages or short dumps to start troubleshooting from.
    Thanks,
    Mathi

  • Data load from DSO to cube fails

    Hi Gurus,
    The data loads failed last night, and when I dig into the process chains I find that the cube which should be loaded from the DSO is not getting loaded.
    The DSO has been loaded without errors from ECC.
    The error message says: "The unit/currency 'source currency 0CURRENCY' with the value 'space' is assigned to the key figure 'source key figure ZARAMT'".
    I looked in the PSA; it has about 50,000 records,
    and all data packages have a green light and all amounts have 0CURRENCY assigned.
    I went into the DTP and looked at the error stack; it had nothing in it. Then I changed the error handling option from 'no update, no reporting' to 'valid records update, no reporting (request red)' and executed; the error stack then showed 101 records.
    The ZARAMT field has 0CURRENCY blank for all these records.
    I tried to assign USD to them and the changes were saved, and I tried to execute again, but then a message says that the request ID should be repaired or deleted before execution. I tried to repair it; it says it cannot be repaired, so I deleted it and executed. It fails, and the error stack still shows 101 records. When I look at the records, the changes I made no longer exist.
    If I delete the request ID before making the changes and then try to save them, they don't get saved.
    What should I do to resolve the issue?
    thanks
    Prasad

    Hi Prasad....
    The error stack is request-specific. Once you delete the request from the target, the data in the error stack is also deleted.
    Actually, in this case what you are supposed to do is:
    1) Change the error handling option to 'valid records update, no reporting (request red)' (as you have already done) and execute the DTP. All the erroneous records will accumulate in the error stack.
    2) Then correct the erroneous records in the error stack.
    3) Then in the DTP, on the "Update" tab, you will get the option "Error DTP". If it has not already been created, you will get the option "Create Error DTP". Click there and execute the Error DTP. The Error DTP will fetch the records from the error stack and create a new request in the target.
    4) Then manually change the status of the original request to green.
    But did you check why the value of this field is blank? If these records come again as part of a delta or full load, your load will fail again. Check in the source system and fix it there for a permanent solution.
    Regards,
    Debjani......

  • Loading from cube to same cube failing

    Hi All,
    We are facing a situation wherein we have to load some data from a custom cube to the same cube with some selections.
    But while loading we face a problem: it shows the error "no active update rules exist", although I have already activated the update rules. The same thing works fine in the production system, but in the dev system it throws this error.
    Also, in SM58 the tRFCs are not getting processed.
    In SM37, the data packets have ARFCSTATE = SYSFAIL.
    I have done the following checks:
    1) The extractor is working fine with the selections.
    2) Loaded into the PSA and then tried to load the data packets one by one, but it failed.
    3) Activated the update rules again and tried to load; it failed again, with the same behaviour as mentioned above.
    4) In SM58, pressed F6; the data packets were processed, but when I ran the load again it failed and the data packets were again in "transaction executing" state.
    Kindly let me know if there is a way out.
    Regards,
    Dola

    Hi Gareth,
    Many thanks for the response.
    Firstly, yes, the same update rules etc. are working fine in production, except for one thing: in production we compress the cube every 10 days, but in dev we are loading it fresh, so there is no compression and no E table.
    Secondly, the status of the update rules is "active, executable". The colour is green.
    Thirdly, we are using BW 3.5, so we are loading through an InfoPackage.
    Fourthly, it is actually a full load, so when I schedule the load it just hangs (goes on and on and on). In SM58 I can see tRFCs related to the load in "executable" state for more than 12 hrs; after that they go to "time limit exceeded" state. And when I click on an individual tRFC and press F6, it disappears without any short dump.
    And in the details tab in RSMO (data load), under "Processing data packets" -> "update rules":
    "start of processing update rules for data target" - up to this message it is green, but after this, for the update rules, we get "missing messages: update rules finished for data target".
    I have checked the update rules; they are the same as the production ones.
    Any idea why this is behaving like this?
    Regards,
    Dola

  • Delta load from ODS to cube failed - Data mismatch

    Hi all
    We have a scenario where the data flow is:
    R/3 table -> DataSource -> PSA -> InfoSource -> ODS -> Cube.
    The cube has an additional field called "monthly version", and since it is a history cube, it is supposed to hold monthly snapshots of all the data in the current cube.
    We are facing the problem that the data for the current month is there in the history ODS but not in the cube. In the ODS -> Manage -> Requests tab I can see only one red request, and that with 0 records.
    However, in the Cube -> Manage -> Reconstruction tab, I can see two red requests with the current month's date. Could these red requests be the reason for the data mismatch between the ODS and the cube?
    Please guide me on how I can solve this problem.
    thanks all
    annie

    Hi
    Thanks for the reply.
    The load to the cube is a delta and runs daily.
    The load to the ODS is a full load on a daily basis.
    Can you help me sort out this issue? I have to work directly in the production environment, so it has to be safe and foolproof.
    Thanks
    annie

  • Cube failed with invalid characters

    Hi,
    My cube load failed with invalid characters. The data flow is source system to ODS and then to the cube.
    Loading into the ODS and its activation are successful. I would like to know why the ODS load and activation did not fail with the above message, while the cube load did fail...
    Thanks in advance.....CK

    Hi,
    BW accepts just capital letters and certain characters. The permitted characters list can be seen via transaction RSKC.
    There are several ways to solve this problem:
    1) Removing the erroneous character in R/3 (for example, a vendor number that needs to be changed can be found in the PSA, in the line shown in the error message)
    2) Changing or removing the character in the update rules (needs to be done in ABAP)
    3) Adding the character to the BW permitted characters, if the character is really needed in BW
    4) If the bad character only occurs once, it can be changed/removed directly by editing the PSA
    5) Putting ALL_CAPITAL in the permitted characters. This needs to be tested first!
    For editing and updating from the PSA: first ensure that the load has arrived in the PSA, then delete the request from the data target, edit the PSA by double-clicking the field you wish to change, and save. Do not mark the line and press Change; this will result in incorrect data. After you have corrected the PSA, right-click on the not-yet-loaded PSA and choose "Start immediately".
    Hope it will help you.
    Regards,

  • Process Chain: Delta to Cube Fail

    Hi All, I am using a process chain which has the following sequence: Start >> Delete Data in ODS >> Full Load to ODS >> Delta to Cube.
    Before running the process chain, I manually loaded the ODS and did a manual init for the cube. Then I tried to run the process chain; it failed while loading to the cube, with the message "Check: No successful initialization of the delta update took place". Before running the process chain I checked that the scheduler had the entry for the init request. I guess when the ODS data is deleted, the init entry in the scheduler gets deleted.
    The reason I delete the ODS is that each day's full loads keep accumulating and the ODS gets fragmented. This was affecting reporting performance. Now that I have deleted the old full loads from the ODS, reports run faster. But since I delete the ODS, I lose the init request for the cube. Is there a workaround to delete the full loads from the ODS and still do deltas to the cubes?
    We are on BI 7.0 SP13. Kindly help.
    thanks
    PK

    Are you following the old-school method in 7.0, using InfoPackages to load data from the ODS to the cube?
    As you are deleting all data in the ODS before loading, the delta capability/changelog is not being used... did I miss something here?
    If you are loading new full loads to the ODS every time and updating the same to the cube, is the data not doubling up in the cube?
    If that's not it and you want only the new/latest request to be loaded to the cube, use the Delete Overlapping Requests feature for the cube.
    If you have been using DTPs, a delta DTP to the cube will surely work, as an init is not required for a delta DTP.
    Let's wait for some experts' advice...

  • BI Loading to Cube Manually without Creating Indexes

    BW 3.5
    I have a process chain scheduled overnight which loads data to the InfoCubes from the ODS, after loading to the staging and transformation layers.
    The load into the InfoCube is scheduled in the process chain as:
    Delete Index -> Delete Contents of the Cube -> Load Data to the Cube -> Create Index.
    The above process chain load to the cube normally takes 5-6 hrs.
    The only concern I have is that at times, if the process chain fails at the staging or transformation layer, I have to rectify it manually.
    After rectifying the error, I then have to load the data to the cube.
    I am left with only a couple of hours, say 2-3 hrs, to complete the cube-load part of the process chain because of business hours.
    Kindly let me know, in the above case where I am short of time to load data to the cube via the process chain:
    Can I manually delete the contents of the cube and load the data to the cube? Here I would not be deleting the existing index(es) and creating index(es) after loading to the cube, because index creation normally takes a long time, which I can avoid when I am short of time.
    Can I do the above at times, and what are the impacts?
    If the load to the InfoCube is scheduled via the process chain on the other normal working days, is it going to fail or will it go through?
    Also, does deleting the contents of the cube delete the indexes?
    Thanks
    Note: As far as I understand, indexes are created to improve performance at the load and query level.
    your input will be appreciated.

    Hi Pawan,
    Please find my views below, inline with your text (originally marked in bold):
    BW 3.5
    I have a process chain scheduled overnight which loads data to the InfoCubes from the ODS, after loading to the staging and transformation layers.
    The load into the InfoCube is scheduled in the process chain as:
    Delete Index -> Delete Contents of the Cube -> Load Data to the Cube -> Create Index.
    I assume you are deleting the entire contents of the cube. If this is the normal pattern of loads to this cube, and if there are no other loads to it, you may consider configuring the InfoCube setting "Delete InfoCube indexes before each data load and then refresh". You will find this setting on the Performance tab, in the create-index batch option. Read the F1 help of the checkbox; it provides more info.
    The above process chain load to the cube normally takes 5-6 hrs.
    The only concern I have is that at times, if the process chain fails at the staging or transformation layer, I have to rectify it manually.
    After rectifying the error, I then have to load the data to the cube.
    I am left with only a couple of hours, say 2-3 hrs, to complete the cube-load part of the process chain because of business hours.
    Kindly let me know, in the above case where I am short of time to load data to the cube via the process chain:
    Can I manually delete the contents of the cube and load the data to the cube? YES, you can. Here I would not be deleting the existing index(es) and creating index(es) after loading to the cube, because index creation normally takes a long time, which I can avoid when I am short of time.
    Can I do the above at times, and what are the impacts? Impacts: lower query performance and loading performance, as you mentioned.
    If the load to the InfoCube is scheduled via the process chain on the other normal working days, is it going to fail or will it go through?
    I don't quite understand the question above, but I assume you mean: if you did a manual load, will there be a failure the next day? THERE WOULDN'T.
    Also, does deleting the contents of the cube delete the indexes?
    YES, it does.
    Thanks
    Pawan - You can skip creating indexes, but you will have slower query performance. However, if you have no further loads to this cube, you could create your indexes during business hours as well. I think building the indexes demands a lock on the cube, and since you are not loading anything else, you should be able to manage it. Lastly, is there no way you can remodel this cube and flow... do you really need full data loads?
    Note: As far as I understand, indexes are created to improve performance at the load and query level. TRUE
    Your input will be appreciated.
    Hope it helps,
    Regards,
    Sunmit.

  • Data not loading to cube

    I am loading a cube from a cube. There is data which meets my InfoPackage selection criteria in the source cube, but I get 0 records transferred and added to the cube. There is no delete in the start routines.
    Just before loading, I selectively deleted all records from the target cube so as not to disturb the existing delta mechanism.
    The load I am making is a full repair.
    Can someone assist
    Thanks

    Hi Akreddy
    I am not sure about "Repair full to Cube"...!
    Still, the following is my opinion about your issue...
    It seems there is some mis-mapping. Just one quick check: in the DTP monitor, at which step are you getting 0 records? Is it at the data package level, the transformation level, or the routine level?
    Identify the step where the transferred records drop to zero, then dig through that step, or post the warning or message in this forum so that it is easier to get it answered.
    Hope that makes it a little clearer!
    Thanks
    K M R
    "Impossible Means I 'M Possible"
    Winners Don't Do Different things,They Do things Differently...!.
    >
    akreddy wrote:
    > I am loading a cube from a cube. There is data which meets my InfoPackage selection criteria in the source cube, but I get 0 records transferred and added to the cube. There is no delete in the start routines.
    >
    > Just before loading, I selectively deleted all records from the target cube so as not to disturb the existing delta mechanism.
    >
    > The load I am making is a full repair.
    >
    > Can someone assist
    >
    > Thanks
    Edited by: K M R on Aug 16, 2010 2:01 PM

  • Loading from ODS to Cube failed in process chain

    Hi All,
    I have a process chain for SD loads scheduled every night. It gave me an error while loading data from the Billing ODS to the Billing Cube (via further processing; the data is not available in the PSA).
    The error it gave is as follows:
    Error while updating data from DataStore object ZSD_O03
    Error 7 when sending an IDoc
    Time limit exceeded. 3
    Time limit exceeded.     
    Errors in source system
    Now the request is loaded into the cube (YELLOW status) with 87125 records transferred and 62245 added. But there is no way for us to check whether the complete data has come into the cube.
    I did not delete the previous request from the cube and tried to repeat the process chain from the same point... it then gave me the following error:
    - Last delta incorrect. A repeat must be requested. This is not possible.
    Please tell me what I should do now... I must not miss the delta records in the cube.
    1) Should I delete the previously loaded request in the cube and repeat the process chain from the same point? Will it get all the delta records in that case?
    2) Should I delete the request from the cube, manually repeat the last delta for the cube, and then manually run the remaining process chain?
    please help me out soon..
    Thanks and Regards,

    Hello Niyati
    You can use the 2nd option.
    Your error message is coming because of a lack of background (BGD) processes and memory... just set the status of the request in the cube to red, delete it, and run a repeat load from the InfoPackage.
    If the number of records is high, just go for the option "load data into PSA and then subsequently into data targets" in the InfoPackage.
    Thanks
    Tripple k

  • Error Message When Loading a Cube

    Hello - I received the following error message when I attempted to load a cube:
    No SID found for value '0.000' of characteristic 0CURRENCY
    Is anyone familiar with this error message?  Also, I tested this load in DEV before I moved the cube into QA but I did not receive the error message in DEV.  Any help would be greatly appreciated, thanks.

    Hi,
    I had a similar error just a few days ago. I was able to solve it by replicating the DataSources again and activating the transfer structures via rs_transtru_activate_all. After that the load went fine.
    regards
    Siggi

  • Elements 7: product number not accepted when installing on a new laptop

    I have Elements 7 loaded on my laptop. I bought a new one and want to load it onto the new one. I talked to someone on the Adobe site last night and he gave me a link to download it. It failed, so I googled it and then downloaded it just fine. I went onto the Adobe site, signed in, and looked at my product history to get the product #; however, when I entered the product #, it didn't like it. What do I do?

    Is your laptop running Windows XP, Windows 7, or Windows 8/8.1? If so, then I suggest downloading a tool that will retrieve the serial number from the old laptop, which you can then use on your new machine. The tool is called:
    Belarc Advisor
    It can be downloaded from here:
    <http://www.belarc.com/Programs/advisorinstaller.exe>
    Install this and then run it by double-clicking its icon, then wait about 5 minutes while it generates all the info about your machine and software keys.

  • All records loaded to cube successfully but the request is in red status

    All the records are loaded from R/3 to the target, but the status of the request is still RED. What is the reason, and how do I resolve it?

    Hi,
    If you find the records are loaded into the cube and the request is still red, follow any one of the following options.
    1. Set the request to green manually.
    2. Check the extraction monitor settings in SPRO -> SAP Customizing Implementation Guide -> Business Intelligence -> Automated Processes -> Extraction Monitor Settings. Check how the settings are done, if you are authorized.
    3. Go to Manage on the target -> Environment -> Automatic Request Processing -> check the option "Set Quality Status to OK".
    Hope this helps,
    Regards,
    Rama Murthy.
    Edited by: Rama Murthy Pvss on May 11, 2010 7:39 AM

  • OCS on a cluster with Load balancing and fail safe environment

    Dear all,
    I want to ask whether there is any document or hints on how to do an OCS R2 installation on 3 servers with the RAC option (clustered Fail Safe), and how I can install OCS on a cluster with load balancing and a fail-safe environment.
    Please, I need your help.
    Thanking you,
    [email protected]


  • WLS 6.1 SP1 stateful EJB problem: load balancing and failover

    I have three problems.
    1. I have 2 clustered servers. My weblogic-ejb-jar.xml is here:
    <?xml version="1.0"?>
    <!DOCTYPE weblogic-ejb-jar PUBLIC '-//BEA Systems, Inc.//DTD WebLogic 6.0.0 EJB//EN' 'http://www.bea.com/servers/wls600/dtd/weblogic-ejb-jar.dtd'>
    <weblogic-ejb-jar>
      <weblogic-enterprise-bean>
        <ejb-name>DBStatefulEJB</ejb-name>
        <stateful-session-descriptor>
          <stateful-session-cache>
            <max-beans-in-cache>100</max-beans-in-cache>
            <idle-timeout-seconds>120</idle-timeout-seconds>
          </stateful-session-cache>
          <stateful-session-clustering>
            <home-is-clusterable>true</home-is-clusterable>
            <home-load-algorithm>RoundRobin</home-load-algorithm>
            <home-call-router-class-name>common.QARouter</home-call-router-class-name>
            <replication-type>InMemory</replication-type>
          </stateful-session-clustering>
        </stateful-session-descriptor>
        <jndi-name>com.daou.EJBS.solutions.DBStatefulBean</jndi-name>
      </weblogic-enterprise-bean>
    </weblogic-ejb-jar>
    When I use <home-call-router-class-name>common.QARouter</home-call-router-class-name> and deploy this EJB, this exception occurs:
    <Warning> <Dispatcher> <RuntimeException thrown by rmi server: 'weblogic.rmi.cluster.ReplicaAwareServerRef@9 - jvmid: '2903098842594628659S:203.231.15.167:[5001,5001,5002,5002,5001,5002,-1]:mydomain:cluster1', oid: '9', implementation: 'weblogic.jndi.internal.RootNamingNode@5f39bc''
    java.lang.IllegalArgumentException: Failed to instantiate weblogic.rmi.cluster.BasicReplicaHandler due to java.lang.reflect.InvocationTargetException
    at weblogic.rmi.cluster.ReplicaAwareInfo.instantiate(ReplicaAwareInfo.java:185)
    at weblogic.rmi.cluster.ReplicaAwareInfo.getReplicaHandler(ReplicaAwareInfo.java:105)
    at weblogic.rmi.cluster.ReplicaAwareRemoteRef.initialize(ReplicaAwareRemoteRef.java:79)
    at weblogic.rmi.cluster.ClusterableRemoteRef.initialize(ClusterableRemoteRef.java:28)
    at weblogic.rmi.cluster.ClusterableRemoteObject.initializeRef(ClusterableRemoteObject.java:255)
    at weblogic.rmi.cluster.ClusterableRemoteObject.onBind(ClusterableRemoteObject.java:149)
    at weblogic.jndi.internal.BasicNamingNode.rebindHere(BasicNamingNode.java:392)
    at weblogic.jndi.internal.ServerNamingNode.rebindHere(ServerNamingNode.java:142)
    at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:362)
    at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:369)
    at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:369)
    at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:369)
    at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:369)
    at weblogic.jndi.internal.RootNamingNode_WLSkel.invoke(Unknown Source)
    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:296)
    So must I use it or not?
    2. When I don't use <home-call-router-class-name>common.QARouter</home-call-router-class-name>, there's no exception, but load balancing does not happen. According to the documentation, load balancing must happen when I call the home.create() method. My client program goes like this:
    DBStateful the_ejb1 = (DBStateful) PortableRemoteObject.narrow(home.create(), DBStateful.class);
    DBStateful the_ejb2 = (DBStateful) PortableRemoteObject.narrow(home.create(3), DBStateful.class);
    The result looks like this:
    the_ejb1 = ClusterableRemoteRef(203.231.15.167 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@4695a6)/397
    the_ejb2 = ClusterableRemoteRef(203.231.15.167 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@acf6e)/398
    or
    the_ejb1 = ClusterableRemoteRef(203.231.15.125 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@252fdf)/380
    the_ejb2 = ClusterableRemoteRef(203.231.15.125 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@6a0252)/381
    I think the result should be like the one below, shouldn't it?
    the_ejb1 = ClusterableRemoteRef(203.231.15.167 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@4695a6)/397
    the_ejb2 = ClusterableRemoteRef(203.231.15.125 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@6a0252)/381
    In this case I think the_ejb1 and the_ejb2 must have instances on different cluster servers, but they go to one server.
    3. If I don't use <home-call-router-class-name>common.QARouter</home-call-router-class-name> and <replication-type>InMemory</replication-type>, then load balancing happens but there's no failover.
    So how can I get load balancing and failover together?

