Dependency between a datasource and a J2EE application, possible?

We have a J2EE application that uses a datasource. On startup the application performs some database-related activities through this datasource, so we need to make sure the datasource is started before our application starts. Is there some kind of dependency we can add to tell NetWeaver to start the datasource first and then the application? We are using NW 04s SP11.

Just to bring this topic to a logical end: we were able to successfully deploy the application with the data-sources.xml file inside the META-INF folder. The dependency is now taken care of.
Thanks.
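
For anyone who finds this thread later: the descriptor that did the trick is a data-sources.xml packed into the EAR's META-INF folder, next to application.xml. A rough sketch is below; the element names are written from memory of the NW 04 DTD and every value (name, driver class, URL, credentials, pool sizes) is a made-up placeholder, so verify it against the data-sources DTD shipped with your SP level:

```xml
<data-sources>
  <data-source>
    <data-source-name>MY_APP_DS</data-source-name>  <!-- placeholder: the alias the application looks up -->
    <init-connections>1</init-connections>
    <max-connections>10</max-connections>
    <sql-engine>open_sql</sql-engine>
    <jdbc-1.x>
      <driver-class-name>com.example.jdbc.Driver</driver-class-name>  <!-- placeholder driver -->
      <url>jdbc:example://dbhost:5432/mydb</url>
      <user-name>appuser</user-name>
      <password>secret</password>
    </jdbc-1.x>
  </data-source>
</data-sources>
```

Because the datasource is deployed as part of the application archive itself, the engine creates it before the application starts, which is exactly the ordering we needed.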

Similar Messages

  • Problem in creating the transformation between my DataSource and InfoCube

    Hi All,
    When I try to create the transformation between my DataSource and InfoCube through "Create Transformation", entering my DataSource 0CO_OM_CCA_1 and the Source System T90CLNT090 in the source-of-transformation area, the system shows the message "Source RSDS 0CO_OM_CCA_1 T90CLNT090 not active". What should I do? I then checked the monitoring tab and found the system message that I am attaching below. Please provide the solution. Previously this worked without problems, but I was unable to transfer the data from my PSA to the InfoCube, so I deleted the transfer rule and everything else, and now when I try again I get this error. Please advise. The system message is:
    "Active version of emulated 3.x DataSource 0CO_OM_CCA_1(T90CLNT090) cannot be displayed
    Message no. RSDS170
    Diagnosis
    The A version of the emulated 3.x DataSource cannot be displayed because no active transfer structure with PSA is available.
    Procedure
    Create a mapping between the 3.x DataSource and a 3.x InfoSource and activate the transfer rules. Select PSA as the transfer method in the 3.x InfoSource.
    Alternatively, you can migrate the 3.x DataSource into a DataSource to create an active version."

    When you deleted the transfer rule, the PSA for the DataSource 0CO_OM_CCA_1 became unavailable.
    Since you plan to use a transformation, it is better to migrate the DataSource 0CO_OM_CCA_1 to the 7.0 version:
    just select the DataSource, right-click and choose "Migrate to 7.0". Once it is migrated you can create the transformation; also create an InfoPackage and a DTP for loading data into the cube.
    If you don't want to migrate to 7.x, then create an InfoSource, a transfer rule and an update rule.

  • Difference in data between the datasource and planning book

    Hi All,
    I have loaded the historical data from an InfoCube into my planning area and then generated the forecast. In our planning book we have provided some key figures like market intelligence, etc. I have written a macro to add the baseline forecast and market intelligence and save the result into a separate key figure; I have defined it as a default macro with macro execution at "details only". Everything works fine in my planning book.
    After extracting the data from my planning area into a datasource, the data does not match between the planning book and the datasource, but only for the key figures for which I have written macros.
    Please let me know where I have gone wrong and what other settings must be made to replicate the same data into my datasource.

    Hi,
    Please note that the system never executes the macro while extracting data from the planning area to the InfoCube. If you want that data to be saved, it is necessary to copy the values to a "non-calculated" liveCache key figure.
    Each time you re-enter the planning book, the newly calculated value is displayed; it is not the real current value from liveCache, and it is not stored there until you save.
    If you don't save this result, it is only visible on the grid, not in the planning area and liveCache.
    Every time the planning book is opened, the values for key figures calculated by macros are recalculated. If you save these values and immediately afterwards extract the data, the numbers will be the same. However, maintaining new values in the planning book without saving will result in "differences"...
    A DEFAULT macro is also launched when the planning book is started.
    Please read note 674238 carefully.
    Regards,
    Sunitha.

  • Dependency between central SWCV and others

    Hi,
    I want to download all my RFCs and IDocs into one single central SWCV. I want to implement a dependency between all my SWCVs and the central SWCV. Can somebody please send me any document on that? Also, do I need to implement the dependency at run time or at build time?
    Thanks,
    sunita.

    Hi Sunita,
    Check this blog from Michal-
    /people/michal.krawczyk2/blog/2005/08/26/xi-are-you-independentfrom-your-dependencies
    You need to create the dependency at build time itself.
    SAP help-
    http://help.sap.com/saphelp_nw2004s/helpdata/en/79/69f9e32bbb9f41aa4043c4c4989a41/content.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/10/8b9c4f1c79024595308d2f4a779c5e/content.htm
    Regards,
    Moorthy
    Message was edited by: Krishna Moorthy P

  • How to create and run J2EE application client for Hello World EJB

    Hi
    I am new to NWDS EJB deployment. I have created a "Hello World" bean, but how do I deploy and run it using a J2EE application client, step by step?
    Please also help with the steps of the Deploy tool.
    Thanks in Advance

    Hi Ananda
    Check this link,
    http://help.sap.com/saphelp_nw04/helpdata/en/55/600e64a6793d419b82a3b7d59512a5/content.htm
    cheers
    Sameer

  • Relation between LIS datasources and Extract Structures

    Roberto,
    Your detailed information makes it very clear why SAP creates the two SnnnBIW1 and SnnnBIW2 tables, plus a structure.
    I have another question related to this process.
    Once you create a datasource from the custom LIS structures, you can assign a Development class to that datasource.
    But the two automatically created tables and the structure are local objects, and you cannot reassign them to another package. That is fine, because when you export/import the transport, all the other objects are also transported.
    But when we enhance the extract structures, a structure ZASnnnBIWS is created which also HAS TO BE a local object. This creates the problem of how we transport these changes.
    I would appreciate your insight into this.
    Thanks
    InduBala

    I checked the maintenance structure column in LBWE, and the fields are there on the left-hand side for the selection criteria.
    How would I check "...fields you have enhanced are ready to use..." from LBWE?
    In RSA6 I double-click on the LIS datasource, and I see no enhanced fields, so I cannot tell whether they are hidden.
    "Did you check in LBWE if your structures have the fields ready to use?" How would I do this? Maybe this is the problem.
    Thanks
    Message was edited by:
            NoNoNo ...

  • Swapping drives between Ultra 5 and 10 is it possible?

    I am currently faced with a problem. I have two machines running Solaris 7: one is an Ultra 10 and the other is an Ultra 5. The Ultra 10 is currently running off an external SCSI drive; the Ultra 5 is running off an internal IDE drive. The Ultra 10 is used as a basic workstation, the Ultra 5 as a software simulator. I am tasked with having to switch these two machines. I have tried just swapping the hard drives, but this does not work. Is there a reason why? Is there a fix? I thought they were the same architecture, so I was assuming you could just swap the drives. If you have any info, please let me know; I would greatly appreciate it.
    Thanks,
    Jason

    The Ultra 5 and Ultra 10 are essentially the same machine in different boxes, so they use the same motherboard.
    Of course, there are a couple of different motherboards with different revisions, and there are also multiple CPUs available for the Ultra 5 / Ultra 10 (the clock speed of an Ultra 5/10 CPU can range from 200-something to 440 MHz, if I'm not mistaken).
    Hence, to clarify: all Ultra 5 and Ultra 10 machines have an IDE controller.
    If your Ultra 10 is booting off a SCSI device, it must be connected to a SCSI card. If you can move this SCSI card to your Ultra 5 and put it in the same PCI slot, chances are you will be able to boot from it. Similarly, the Ultra 10 should be able to boot from the Ultra 5's hard drive, unless they have very different motherboard revisions.
    You will probably have to change the default boot device on both boxes though.
    If your systems for some odd reason, which I can't think of at the moment, refuse to boot off their new hosts, you can always boot the system from a JumpStart image or CD-ROM, mount the / partition under /mnt and run
    devfsadm -r /mnt
    This should rebuild all device paths and friends to match the new host.
    Then again, remember to change the boot-device parameter in your OBP; if you are lucky, you should be able to just swap the parameters between the U5 and the U10.
    //Magnus

  • Deploy differences between 904 standalone and 903 J2EE and Web Cache versions

    I am trying to port an application from JBoss (3.0.3) to OAS 9iAS Release 2. Since the application uses local interfaces in the EJBs, I am using the pre-release versions of OAS. My development environment also includes Ant and XDoclet. I am not using JDeveloper or TopLink, and would prefer not to, due to the different targets we are using.
    I have successfully ported to the 9.0.4 OC4J (standalone) version. In doing so I collected all my EJB class files in one jar, appName-ejb.jar, which along with the war file and META-INF files goes into the ear file. Attempting to deploy this ear file directly on the 9.0.3 "J2EE and Web Cache" version gives me the following error message in the Oracle Enterprise Manager Console: "Deployment failed: Nested exception Root Cause: Syntax error in source. Syntax error in source"
    The how-to "Implement Local Interface (cmplocal)" also uses a single jar file, cmplocal-ejb.jar. The description is targeted toward OC4J standalone; I have not attempted to deploy this ear file on the "J2EE and Web Cache" version.
    The Petstore demo ear file (ref: "Oracle9iAS Containers for J2EE User's Guide (9.0.3)") has all its beans in separate jar files and deploys without any problems. The documentation also indicates that all beans have to be in separate jar files in the root of the ear file, i.e. this is a requirement. Do I understand the documentation correctly?
    Why the difference between the standalone version and the next version? Doesn't having to maintain a large number of bean jar files in the root of the ear file make development/deployment much more difficult than necessary?
    Does there exist any batch workaround?
    Dag
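
    If the Petstore layout really is the requirement, the packaging change is confined to the EAR: each bean jar sits in the EAR root and is listed as its own module in META-INF/application.xml. A sketch against the standard J2EE 1.3 application DTD (the jar and war names here are placeholders, not your actual ones):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE application PUBLIC "-//Sun Microsystems, Inc.//DTD J2EE Application 1.3//EN"
  "http://java.sun.com/dtd/application_1_3.dtd">
<application>
  <display-name>appName</display-name>
  <!-- one <module> entry per bean jar in the EAR root -->
  <module>
    <ejb>customer-ejb.jar</ejb>
  </module>
  <module>
    <ejb>order-ejb.jar</ejb>
  </module>
  <module>
    <web>
      <web-uri>appName.war</web-uri>
      <context-root>appName</context-root>
    </web>
  </module>
</application>
```

    An Ant ear task can assemble this layout from per-bean jar targets, so the split need not complicate the build much.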


  • Dependency between web project and bean module

    I am using NetBeans 6.7.
    I created a web application that has different packages, and I created a bean module separate from the web project.
    I want this module to use one of the packages from that web project; how can I do that in NetBeans?
    I also want the web project to access the beans from the bean module; how can I do that in NetBeans?
    I await your reply as soon as possible.
    Thanks to all.

    Hm, it's not good when two different projects have bidirectional dependencies; in my opinion that means they should be part of the same code base and thus the same project.
    A possible solution I see is:
    - take the part of the web project that the 'bean' project depends on out of the web project and put it into the bean project itself
    - add the bean project as a project dependency to the web project (thus bringing back the logic you just moved to the bean project)
    You can manage library and project dependencies in the project properties. Right click on the project name in the tree view to get a menu that holds the function to do that (cannot remember its exact name right now and I don't have Netbeans handy).

  • How to config Rules between Service Identity and Relying Party Application in Azure ACS?

    I am going to implement an Authorization Server that talks to the ACS OAuth2 endpoint with Java, following this article.
    First, I created a Service Identity using the ACS Management Service via the OData protocol, and then added a password credential in the ACS Management Portal.
    Id: "22194691",
    Name: "oauth2-client-sample",
    Description: "Test",
    RedirectAddress: "http://localhost:8080",
    SystemReserved: false
    Second, I created a relying party application in ACS Management Portal with no Identity Providers, assume that its ID is 22194640 and its Realm is "https://oauth2-res-sample.herokuapp.com/".
    Third, I created a Delegation via the ACS Management Service and got an Authorization Code (for example, XkbSXdM0d0v8wQ835hvKUg==) from ACS:
    POST /v2/mgmt/service/Delegations
    Authorization: Bearer XXXX(SWT from ACS)
    Content-Type: application/json
    {"ServiceIdentityId": "22194691", "RelyingPartyId": "22194640",
    "NameIdentifier": "[email protected]", "IdentityProvider": "WAAD"}
    At last, I posted the authorization code and the service identity credentials to ACS to request an Access Token:
    POST v2/OAuth2-13
    Content-Type: application/x-www-form-urlencoded
    grant_type=authorization_code&client_id=oauth2-client-sample
    &client_secret=xxxxxxxx&code=XkbSXdM0d0v8wQ835hvKUg%3D%3D
    &redirect_uri=http%3A%2F%2Flocalhost%3A8080
    &scope=https%3A%2F%2Foauth2-res-sample.herokuapp.com%2F
    But I got the following error from ACS,
    error: "invalid_request" error_description: "ACS50000: There was an error issuing a token. ACS60000: An error occurred while processing rules for relying party 'https://oauth2-res-sample.herokuapp.com/'
    using the service identity or identity provider named 'oauth2-client-sample'. ACS60000: Policy engine execution error. Trace ID: e8a1fa8c-19d8-4271-8095-80938ea45e69 Correlation ID: 82a0e83e-202f-4957-8871-cdcdf927b512 Timestamp: 2015-02-23 02:21:34Z"
    This is the Rule Group for the relying party application; it passes all the input claims through to the output. But I don't know what's wrong.
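
    For what it's worth, the form encoding of that last request can be double-checked mechanically. This little Python sketch (values copied from the post; the secret is a placeholder) rebuilds the application/x-www-form-urlencoded body and produces the same percent escapes shown above:

```python
from urllib.parse import urlencode

# Build the form body for the OAuth2 token request.
# Values are the ones quoted in the post; "xxxxxxxx" is a placeholder secret.
params = {
    "grant_type": "authorization_code",
    "client_id": "oauth2-client-sample",
    "client_secret": "xxxxxxxx",
    "code": "XkbSXdM0d0v8wQ835hvKUg==",
    "redirect_uri": "http://localhost:8080",
    "scope": "https://oauth2-res-sample.herokuapp.com/",
}
body = urlencode(params)  # percent-encodes '=', ':' and '/' as %3D, %3A, %2F
print(body)
```

    If the encoded body matches and the error persists, the problem is on the rule/policy side rather than in the request format.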

    Hello Cary!
    Could you confirm whether you were able to resolve the problem stated above? If not, please let us know and we'll be glad to help. If yes, please share your valuable inputs for the community's reference.
    Thank you,
    Arvind

  • Difference between SP datasource and object data source

    Hi,
    I have a requirement to query a large list and display the data in a view similar to an SP list view (including filtering, sorting and paging).
    I have implemented this using SPGridView and an ObjectDataSource.
    Can you please comment on the performance of SPGridView's sorting, filtering and paging when using an ObjectDataSource versus an SPDataSource?
    Thanks,
    Sunitha

    Hi,
    Prefer SPDataSource over ObjectDataSource:
    http://www.sharepointnutsandbolts.com/2008/06/spdatasource-every-sharepoint-developer.html
    Please remember to click 'Mark as Answer' on the answer if it helps you

  • Is there any dependency between the OS X version and the boot environment? I cannot boot my bootable software on my MacBook Pro with OS X 10.8

    I cannot even boot a Lion 10.7.4 bootable DVD.
    What can I do? The boot devices appear in the boot menu, but they do not boot.

    Then, I imagine there's a problem with the DVD.
    http://www.macworld.com/article/1161069/make_a_bootable_lion_installer.html
    It would run a lot faster on another drive or pocket drive.

  • Syncing a media library between a Mac and a Windows PC: possible with Dropbox?

    Hi everybody,
    Is this a Windows or a Mac question? I'll try it here first... X)
    I have a Mac at one place and a Windows PC at another. Both have a Dropbox folder (Dropbox = a sync tool) to keep the data on them in sync.
    That works like a charm, except for iTunes:
    I would like to use the exact same iTunes media library on the Windows PC and on the Mac.
    (It must be an iTunes-based solution, as I would like to sync my iPhone at both locations with iTunes, so I would have fresh podcasts etc. going to AND coming from work, and would not need to think about which computer I add new music, apps etc. on.)
    Physically I already have the same media files on both computers, via Dropbox, but the location descriptions in the iTunes library file do not match:
    MAC: <key>Location</key><string>file://localhost/Users/xxx/Dropbox/Music/iTunes/...</string>
    WIN: <key>Location</key><string>file://localhost/C:/Users/xxx/Documents/My%20Dropbox/Music/iTunes/...</string>
    So it does not work yet.
    Is there a way to do what I want? (Like a script that fires up iTunes and replaces the locations each time?)
    I assume quite a few people have this problem...
    Thank you very much for any help/hints!!
    Best wishes,
    Daniel
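
    The "script that replaces the locations" idea can be sketched in a few lines of Python. The two prefixes below are illustrative, modeled on the paths quoted above; a real script would load the exported library XML, rewrite every Location value with this function, and save it back:

```python
# Illustrative prefixes modeled on the two Location values quoted above.
MAC_PREFIX = "file://localhost/Users/xxx/Dropbox/Music/iTunes/"
WIN_PREFIX = "file://localhost/C:/Users/xxx/Documents/My%20Dropbox/Music/iTunes/"

def translate_location(location: str, src: str, dst: str) -> str:
    """Rewrite one <key>Location</key> value from one machine's prefix to the other's."""
    if location.startswith(src):
        return dst + location[len(src):]
    return location  # leave paths outside the shared library untouched

# Converting a Mac entry for use on the Windows PC:
win_loc = translate_location(MAC_PREFIX + "Podcasts/episode1.mp3", MAC_PREFIX, WIN_PREFIX)
print(win_loc)
```

    (As the follow-up below reports, syncing the library file itself turned out to be fragile in practice, so treat this purely as a starting point.)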

    Hi,
    I never really gave a final report on this question:
    things did not really work the way I wanted (having the library file itself synced).
    What I did in the end was let the media files sync via Dropbox (i.e., let the iTunes media folders of both computers point to the same folder in Dropbox).
    I kept local library files for both computers, but used a hex editor to change the library ID of one computer to the library ID of the other (so that my iPhone syncs with both computers). If you want to do this, there are several posts on how to edit the library file with a hex editor. For me it worked with iTunes 9; it did not work with iTunes 10 (as far as I remember), so you might wish to downgrade before attempting this.
    Best wishes,
    Daniel

  • Difference between cross applications and CRM cross applications

    Dear All ,
    I would like to know the difference between cross applications and CRM cross applications.
    Please help me understand the difference.
    Regards,
    Srini.

    Hi Eswar Ram,
    Thanks for your response. I have the same question:
    I would like to know the difference between the following (Cross-Application component / CRM Cross-Application component):
    1. SPRO > IMG > Cross-Application Components
    2. SPRO > IMG > Customer Relationship Management > CRM Cross-Application Component
    Regards,
    Silpa.
    Edited by: silpa reddy on Mar 5, 2009 4:11 AM

  • Alert log:  failed to establish dependency between database and diskgroup

    Hi, I have an 11.2.0 database with ASM. When I start up the instance I see this error in the alert log:
    ERROR: failed to establish dependency between database SPBUFOR and diskgroup resource ora.SPBUFOR_FLASH.dg
    SPBUFOR_FLASH is a diskgroup where a flash recovery area is located.
    The database opens cleanly, but I want to be sure that everything works fine.
    How can I resolve this issue?

    ASM ALERT LOG:
    Wed Oct 05 15:35:12 2011
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/11.2.0/grid/dbs/arch
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =0
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    NOTE: Volume support enabled
    Starting up:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Automatic Storage Management option.
    Using parameter settings in server-side spfile +CSA_DATA/asm/asmparameterfile/registry.253.758208241
    System parameters with non-default values:
    large_pool_size = 12M
    instance_type = "asm"
    remote_login_passwordfile= "EXCLUSIVE"
    asm_diskgroups = "CSA_FLASH"
    asm_diskgroups = "SPBUFOR_DATA"
    asm_diskgroups = "SPBUFOR_FLASH"
    asm_power_limit = 1
    diagnostic_dest = "/u01/app/oracle"
    Wed Oct 05 15:35:13 2011
    PMON started with pid=2, OS id=24108
    Wed Oct 05 15:35:14 2011
    VKTM started with pid=3, OS id=24112 at elevated priority
    VKTM running at (10)millisec precision with DBRM quantum (100)ms
    Wed Oct 05 15:35:14 2011
    GEN0 started with pid=4, OS id=24118
    Wed Oct 05 15:35:14 2011
    DIAG started with pid=5, OS id=24122
    Wed Oct 05 15:35:14 2011
    PSP0 started with pid=6, OS id=24127
    Wed Oct 05 15:35:14 2011
    DIA0 started with pid=7, OS id=24131
    Wed Oct 05 15:35:14 2011
    MMAN started with pid=8, OS id=24135
    Wed Oct 05 15:35:14 2011
    DBW0 started with pid=9, OS id=24139
    Wed Oct 05 15:35:14 2011
    LGWR started with pid=10, OS id=24143
    Wed Oct 05 15:35:14 2011
    CKPT started with pid=11, OS id=24148
    Wed Oct 05 15:35:14 2011
    SMON started with pid=12, OS id=24157
    Wed Oct 05 15:35:14 2011
    RBAL started with pid=13, OS id=24163
    Wed Oct 05 15:35:14 2011
    GMON started with pid=14, OS id=24173
    Wed Oct 05 15:35:14 2011
    MMON started with pid=15, OS id=24177
    Wed Oct 05 15:35:14 2011
    MMNL started with pid=16, OS id=24181
    ORACLE_BASE from environment = /u01/app/oracle
    Wed Oct 05 15:35:15 2011
    SQL> ALTER DISKGROUP ALL MOUNT
    NOTE: Diskgroups listed in ASM_DISKGROUPS are
    CSA_FLASH
    SPBUFOR_DATA
    SPBUFOR_FLASH
    Diskgroup with spfile:CSA_DATA
    NOTE: cache registered group CSA_DATA number=1 incarn=0x75fe76df
    NOTE: cache began mount (first) of group CSA_DATA number=1 incarn=0x75fe76df
    NOTE: cache registered group CSA_FLASH number=2 incarn=0x760e76e0
    NOTE: cache began mount (first) of group CSA_FLASH number=2 incarn=0x760e76e0
    NOTE: cache registered group SPBUFOR_DATA number=3 incarn=0x760e76e1
    NOTE: cache began mount (first) of group SPBUFOR_DATA number=3 incarn=0x760e76e1
    NOTE: cache registered group SPBUFOR_FLASH number=4 incarn=0x761e76e2
    NOTE: cache began mount (first) of group SPBUFOR_FLASH number=4 incarn=0x761e76e2
    NOTE: Loaded library: /opt/oracle/extapi/64/asm/orcl/1/libasm.so
    NOTE: Assigning number (1,0) to disk (ORCL:CSA_D1)
    NOTE: Assigning number (1,1) to disk (ORCL:CSA_D2)
    NOTE: Assigning number (2,0) to disk (ORCL:CSA_F1)
    NOTE: Assigning number (2,1) to disk (ORCL:CSA_F2)
    NOTE: Assigning number (3,0) to disk (ORCL:SPBUFOR_D1)
    NOTE: Assigning number (3,1) to disk (ORCL:SPBUFOR_D2)
    NOTE: Assigning number (3,2) to disk (ORCL:SPBUFOR_D3)
    NOTE: Assigning number (4,0) to disk (ORCL:SPBUFOR_F1)
    NOTE: Assigning number (4,1) to disk (ORCL:SPBUFOR_F2)
    NOTE: start heartbeating (grp 1)
    kfdp_query(CSA_DATA): 6
    kfdp_queryBg(): 6
    NOTE: cache opening disk 0 of grp 1: CSA_D1 label:CSA_D1
    NOTE: F1X0 found on disk 0 au 2 fcn 0.0
    NOTE: cache opening disk 1 of grp 1: CSA_D2 label:CSA_D2
    NOTE: cache mounting (first) external redundancy group 1/0x75FE76DF (CSA_DATA)
    NOTE: cache recovered group 1 to fcn 0.25573
    NOTE: LGWR attempting to mount thread 1 for diskgroup 1 (CSA_DATA)
    NOTE: LGWR found thread 1 closed at ABA 4.2486
    NOTE: LGWR mounted thread 1 for diskgroup 1 (CSA_DATA)
    NOTE: LGWR opening thread 1 at fcn 0.25573 ABA 5.2487
    NOTE: cache mounting group 1/0x75FE76DF (CSA_DATA) succeeded
    NOTE: cache ending mount (success) of group CSA_DATA number=1 incarn=0x75fe76df
    NOTE: start heartbeating (grp 2)
    kfdp_query(CSA_FLASH): 8
    kfdp_queryBg(): 8
    NOTE: cache opening disk 0 of grp 2: CSA_F1 label:CSA_F1
    NOTE: F1X0 found on disk 0 au 2 fcn 0.0
    NOTE: cache opening disk 1 of grp 2: CSA_F2 label:CSA_F2
    NOTE: cache mounting (first) external redundancy group 2/0x760E76E0 (CSA_FLASH)
    NOTE: cache recovered group 2 to fcn 0.49881
    NOTE: LGWR attempting to mount thread 1 for diskgroup 2 (CSA_FLASH)
    NOTE: LGWR found thread 1 closed at ABA 3.5793
    NOTE: LGWR mounted thread 1 for diskgroup 2 (CSA_FLASH)
    NOTE: LGWR opening thread 1 at fcn 0.49881 ABA 4.5794
    NOTE: cache mounting group 2/0x760E76E0 (CSA_FLASH) succeeded
    NOTE: cache ending mount (success) of group CSA_FLASH number=2 incarn=0x760e76e0
    NOTE: start heartbeating (grp 3)
    kfdp_query(SPBUFOR_DATA): 10
    kfdp_queryBg(): 10
    NOTE: cache opening disk 0 of grp 3: SPBUFOR_D1 label:SPBUFOR_D1
    NOTE: F1X0 found on disk 0 au 2 fcn 0.0
    NOTE: cache opening disk 1 of grp 3: SPBUFOR_D2 label:SPBUFOR_D2
    NOTE: cache opening disk 2 of grp 3: SPBUFOR_D3 label:SPBUFOR_D3
    NOTE: cache mounting (first) external redundancy group 3/0x760E76E1 (SPBUFOR_DATA)
    NOTE: cache recovered group 3 to fcn 0.317867
    NOTE: LGWR attempting to mount thread 1 for diskgroup 3 (SPBUFOR_DATA)
    NOTE: LGWR found thread 1 closed at ABA 3.8570
    NOTE: LGWR mounted thread 1 for diskgroup 3 (SPBUFOR_DATA)
    NOTE: LGWR opening thread 1 at fcn 0.317867 ABA 4.8571
    NOTE: cache mounting group 3/0x760E76E1 (SPBUFOR_DATA) succeeded
    NOTE: cache ending mount (success) of group SPBUFOR_DATA number=3 incarn=0x760e76e1
    NOTE: start heartbeating (grp 4)
    kfdp_query(SPBUFOR_FLASH): 12
    kfdp_queryBg(): 12
    NOTE: cache opening disk 0 of grp 4: SPBUFOR_F1 label:SPBUFOR_F1
    NOTE: F1X0 found on disk 0 au 2 fcn 0.0
    NOTE: cache opening disk 1 of grp 4: SPBUFOR_F2 label:SPBUFOR_F2
    NOTE: cache mounting (first) external redundancy group 4/0x761E76E2 (SPBUFOR_FLASH)
    NOTE: cache recovered group 4 to fcn 0.16114
    NOTE: LGWR attempting to mount thread 1 for diskgroup 4 (SPBUFOR_FLASH)
    NOTE: LGWR found thread 1 closed at ABA 2.1922
    NOTE: LGWR mounted thread 1 for diskgroup 4 (SPBUFOR_FLASH)
    NOTE: LGWR opening thread 1 at fcn 0.16114 ABA 3.1923
    NOTE: cache mounting group 4/0x761E76E2 (SPBUFOR_FLASH) succeeded
    NOTE: cache ending mount (success) of group SPBUFOR_FLASH number=4 incarn=0x761e76e2
    kfdp_query(CSA_DATA): 13
    kfdp_queryBg(): 13
    NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 1
    SUCCESS: diskgroup CSA_DATA was mounted
    kfdp_query(CSA_FLASH): 14
    kfdp_queryBg(): 14
    NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 2
    SUCCESS: diskgroup CSA_FLASH was mounted
    kfdp_query(SPBUFOR_DATA): 15
    kfdp_queryBg(): 15
    NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 3
    SUCCESS: diskgroup SPBUFOR_DATA was mounted
    kfdp_query(SPBUFOR_FLASH): 16
    kfdp_queryBg(): 16
    NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 4
    SUCCESS: diskgroup SPBUFOR_FLASH was mounted
    SUCCESS: ALTER DISKGROUP ALL MOUNT
    SQL> ALTER DISKGROUP ALL ENABLE VOLUME ALL
    SUCCESS: ALTER DISKGROUP ALL ENABLE VOLUME ALL
    NOTE: diskgroup resource ora.CSA_DATA.dg is online
    NOTE: diskgroup resource ora.CSA_FLASH.dg is online
    NOTE: diskgroup resource ora.SPBUFOR_DATA.dg is online
    NOTE: diskgroup resource ora.SPBUFOR_FLASH.dg is online
    Wed Oct 05 15:35:44 2011
    Starting background process ASMB
    Wed Oct 05 15:35:44 2011
    ASMB started with pid=18, OS id=24330
    Thu Oct 06 11:48:19 2011
    SQL> alter diskgroup SPBUFOR_DATA check all
    NOTE: starting check of diskgroup SPBUFOR_DATA
    kfdp_checkDsk(): 17
    kfdp_checkDsk(): 18
    Thu Oct 06 11:48:30 2011
    kfdp_checkDsk(): 19
    SUCCESS: check of diskgroup SPBUFOR_DATA found no errors
    SUCCESS: alter diskgroup SPBUFOR_DATA check all
    Thu Oct 06 11:48:56 2011
    SQL> alter diskgroup SPBUFOR_FLASH check all
    NOTE: starting check of diskgroup SPBUFOR_FLASH
    kfdp_checkDsk(): 20
    kfdp_checkDsk(): 21
    SUCCESS: check of diskgroup SPBUFOR_FLASH found no errors
    SUCCESS: alter diskgroup SPBUFOR_FLASH check all
