Best strategy for data definition handling while using OCCI

Hi, the subject says it all.
Assuming OCCI is used for data manipulation (e.g. object navigation), what's the best strategy for handling data definitions in the same context?
I mean primarily the dynamic creation of tables and columns.
The apparent choices are:
- issuing DDL as SQL from OCCI.
- using OCI in parallel.
Did I miss anything? Thanks for any suggestions.

Agreeing with Kappy that your "secondary" backups should be made with a different app. You can use Disk Utility as he details, or a "cloning" app such as CarbonCopyCloner or SuperDuper that can update the clone periodically, rather than erasing and re-copying from scratch.
CarbonCopyCloner (http://www.bombich.com) is donationware; SuperDuper (http://www.shirt-pocket.com/SuperDuper/SuperDuperDescription.html) has a free version, but you need the paid one (about $30) to do updates instead of full replacements, or scheduling.
If you do decide to make duplicate TM backups, you can do that. Just tell TM when you want to change disks (thus it's a good idea to give them different names). But there's no reason to erase and do full backups; after the first full backup to each drive, Time Machine will back up only the changes made since the last backup *to that disk*. Each is completely independent.

Similar Messages

  • OSB: Cannot acquire data source error while using JCA DBAdapter in OSB

    Hi All,
    I've run into a 'Cannot acquire data source' error while using the JCA DBAdapter in OSB.
    The error info is as follows:
    The invocation resulted in an error: Invoke JCA outbound service failed with application error, exception: com.bea.wli.sb.transports.jca.JCATransportException: oracle.tip.adapter.sa.api.JCABindingException: oracle.tip.adapter.sa.impl.fw.ext.org.collaxa.thirdparty.apache.wsif.WSIFException: servicebus:/WSDL/DBAdapter1/RetrievePersonService [ RetrievePersonService_ptt::RetrievePersonServiceSelect(RetrievePersonServiceSelect_inputParameters,PersonTCollection) ] - WSIF JCA Execute of operation 'RetrievePersonServiceSelect' failed due to: Could not create/access the TopLink Session.
    This session is used to connect to the datastore.
    Caused by Exception [EclipseLink-7060] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.ValidationException
    Exception Description: Cannot acquire data source [jdbc/soademoDatabase].
    Internal Exception: javax.naming.NameNotFoundException: Unable to resolve 'jdbc.soademoDatabase'. Resolved 'jdbc'; remaining name 'soademoDatabase'.
    ; nested exception is:
    BINDING.JCA-11622
    Could not create/access the TopLink Session.
    This session is used to connect to the datastore.
    JNDI Name for the Database pool: eis/DB/soademoDatabase
    JNDI Name for the Data source: jdbc/soademoDatabase
    I created a basic DBAdapter in JDeveloper, got the xsd file, wsdl file, .jca file and the TopLink mapping file, and imported them into the OSB project.
    Then I used the .jca file to generate a business service and tested it; the error occurred as described above.
    Login info in RetrievePersonService-or-mappings.xml
    <login xsi:type="database-login">
    <platform-class>org.eclipse.persistence.platform.database.oracle.Oracle9Platform</platform-class>
    <user-name></user-name>
    <connection-url></connection-url>
    </login>
    The .jca file content is as follows:
    <adapter-config name="RetrievePersonService" adapter="Database Adapter" wsdlLocation="RetrievePersonService.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
    <connection-factory location="eis/DB/soademoDatabase" UIConnectionName="Connection1" adapterRef=""/>
    <endpoint-interaction portType="RetrievePersonService_ptt" operation="RetrievePersonServiceSelect">
    <interaction-spec className="oracle.tip.adapter.db.DBReadInteractionSpec">
    <property name="DescriptorName" value="RetrievePersonService.PersonT"/>
    <property name="QueryName" value="RetrievePersonServiceSelect"/>
    <property name="MappingsMetaDataURL" value="RetrievePersonService-or-mappings.xml"/>
    <property name="ReturnSingleResultSet" value="false"/>
    <property name="GetActiveUnitOfWork" value="false"/>
    </interaction-spec>
    </endpoint-interaction>
    </adapter-config>
    RetrievePersonService_db.wsdl is as follows:
    <?xml version="1.0" encoding="UTF-8"?>
    <WL5G3N0:definitions name="RetrievePersonService-concrete" targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/KnowledeMgmtSOAApplication/AdapterJDevProject/RetrievePersonService" xmlns:WL5G3N0="http://schemas.xmlsoap.org/wsdl/" xmlns:WL5G3N1="http://xmlns.oracle.com/pcbpel/adapter/db/KnowledeMgmtSOAApplication/AdapterJDevProject/RetrievePersonService" xmlns:WL5G3N2="http://schemas.xmlsoap.org/wsdl/soap/">
    <WL5G3N0:import location="RetrievePersonService.wsdl" namespace="http://xmlns.oracle.com/pcbpel/adapter/db/KnowledeMgmtSOAApplication/AdapterJDevProject/RetrievePersonService"/>
    <WL5G3N0:binding name="RetrievePersonService_ptt-binding" type="WL5G3N1:RetrievePersonService_ptt">
    <WL5G3N2:binding style="document" transport="http://www.bea.com/transport/2007/05/jca"/>
    <WL5G3N0:operation name="RetrievePersonServiceSelect">
    <WL5G3N2:operation soapAction="RetrievePersonServiceSelect"/>
    <WL5G3N0:input>
    <WL5G3N2:body use="literal"/>
    </WL5G3N0:input>
    <WL5G3N0:output>
    <WL5G3N2:body use="literal"/>
    </WL5G3N0:output>
    </WL5G3N0:operation>
    </WL5G3N0:binding>
    <WL5G3N0:service name="RetrievePersonService_ptt-bindingQSService">
    <WL5G3N0:port binding="WL5G3N1:RetrievePersonService_ptt-binding" name="RetrievePersonService_ptt-bindingQSPort">
    <WL5G3N2:address location="jca://eis/DB/soademoDatabase"/>
    </WL5G3N0:port>
    </WL5G3N0:service>
    </WL5G3N0:definitions>
    Any suggestion is appreciated.
    Thanks in advance!
    Edited by: user11262117 on Jan 26, 2011 5:28 PM

    Hi Anuj,
    Thanks for your reply!
    I found that the data source is registered on server soa_server1 as follows:
    Binding Name: jdbc.soademoDatabase
    Class: weblogic.jdbc.common.internal.RmiDataSource_1033_WLStub
    Hash Code: 80328036
    toString Results: ClusterableRemoteRef(8348400613458600489S:10.2.1.143:[8001,8001,-1,-1,-1,-1,-1]:base_domain:soa_server1 [8348400613458600489S:10.2.1.143:[8001,8001,-1,-1,-1,-1,-1]:base_domain:soa_server1/291])/291
    Binding Name: jdbc.SOADataSource
    Class: weblogic.jdbc.common.internal.RmiDataSource_1033_WLStub
    Hash Code: 92966755
    toString Results: ClusterableRemoteRef(8348400613458600489S:10.2.1.143:[8001,8001,-1,-1,-1,-1,-1]:base_domain:soa_server1 [8348400613458600489S:10.2.1.143:[8001,8001,-1,-1,-1,-1,-1]:base_domain:soa_server1/285])/285
    I don't know how to determine which server the DBAdapter is targeted to.
    But I found the following information:
    Under Deployment->DBAdapter->Monitoring->Outbound Connection Pools
    Outbound Connection Pool Server State Current Connections Created Connections
    eis/DB/SOADemo AdminServer Running 1 1
    eis/DB/SOADemo soa_server1 Running 1 1
    eis/DB/soademoDatabase AdminServer Running 1 1
    eis/DB/soademoDatabase soa_server1 Running 1 1
    The DbAdapter is related to the following files:
    C:\Oracle\Middleware\home_11gR1\Oracle_SOA1\soa\connectors\DbAdapter.rar
    C:\Oracle\Middleware\home_11gR1\Oracle_SOA1\soa\DBPlan\Plan.xml
    I unzipped DbAdapter.rar, opened weblogic-ra.xml and found that only one data source is registered:
    <?xml version="1.0"?>
    <weblogic-connector xmlns="http://www.bea.com/ns/weblogic/90">
    <enable-global-access-to-classes>true</enable-global-access-to-classes>
    <outbound-resource-adapter>
    <default-connection-properties>
    <pool-params>
    <initial-capacity>1</initial-capacity>
    <max-capacity>1000</max-capacity>
    </pool-params>
    <properties>
    <property>
    <name>usesNativeSequencing</name>
    <value>true</value>
    </property>
    <property>
    <name>sequencePreallocationSize</name>
    <value>50</value>
    </property>
    <property>
    <name>defaultNChar</name>
    <value>false</value>
    </property>
    <property>
    <name>usesBatchWriting</name>
    <value>true</value>
    </property>
    <property>
    <name>usesSkipLocking</name>
    <value>true</value>
    </property>
    </properties>
              </default-connection-properties>
    <connection-definition-group>
    <connection-factory-interface>javax.resource.cci.ConnectionFactory</connection-factory-interface>
    <connection-instance>
    <jndi-name>eis/DB/SOADemo</jndi-name>
              <connection-properties>
                   <properties>
                   <property>
                   <name>xADataSourceName</name>
                   <value>jdbc/SOADataSource</value>
                   </property>
                   <property>
                   <name>dataSourceName</name>
                   <value></value>
                   </property>
                   <property>
                   <name>platformClassName</name>
                   <value>org.eclipse.persistence.platform.database.Oracle10Platform</value>
                   </property>
                   </properties>
              </connection-properties>
    </connection-instance>
    </connection-definition-group>
    </outbound-resource-adapter>
    </weblogic-connector>
    Then I decided to use eis/DB/SOADemo for testing.
    For the JDeveloper project, after I deployed it to the WebLogic server, it works fine.
    But for the OSB project, which references the wsdl, jca and mapping files from the JDeveloper project, I still got the same error as follows:
    BEA-380001: Invoke JCA outbound service failed with application error, exception:
    com.bea.wli.sb.transports.jca.JCATransportException: oracle.tip.adapter.sa.api.JCABindingException: oracle.tip.adapter.sa.impl.fw.ext.org.collaxa.thirdparty.apache.wsif.WSIFException: servicebus:/WSDL/DBAdapterTest/DBReader [ DBReader_ptt::DBReaderSelect(DBReaderSelect_inputParameters,PersonTCollection) ] - WSIF JCA Execute of operation 'DBReaderSelect' failed due to: Could not create/access the TopLink Session.
    This session is used to connect to the datastore.
    Caused by Exception [EclipseLink-7060] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.ValidationException
    Exception Description: Cannot acquire data source [jdbc/SOADataSource].
    Internal Exception: javax.naming.NameNotFoundException: Unable to resolve 'jdbc.SOADataSource'. Resolved 'jdbc'; remaining name 'SOADataSource'.
    ; nested exception is:
    BINDING.JCA-11622
    Could not create/access the TopLink Session.
    This session is used to connect to the datastore.
    Caused by Exception [EclipseLink-7060] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.ValidationException
    Exception Description: Cannot acquire data source [jdbc/SOADataSource].
    Internal Exception: javax.naming.NameNotFoundException: Unable to resolve 'jdbc.SOADataSource'. Resolved 'jdbc'; remaining name 'SOADataSource'.
    You may need to configure the connection settings in the deployment descriptor (i.e. DbAdapter.rar#META-INF/weblogic-ra.xml) and restart the server. This exception is considered not retriable, likely due to a modelling mistake.
    It's almost driving me crazy!! :-(
    What's the purpose of 'weblogic-ra.xml' under the folder of 'C:\Oracle\Middleware\home_11gR1\Oracle_OSB1\lib\external\adapters\META-INF'?
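    A NameNotFoundException like "Unable to resolve 'jdbc.SOADataSource'. Resolved 'jdbc'" usually means the data source simply is not bound in the JNDI tree of the server the call actually runs on (your dump above only shows bindings on soa_server1). One quick way to check each server is a standalone JNDI probe; below is a minimal sketch, assuming a WebLogic client jar (wlfullclient.jar or wlthint3client.jar) is on the classpath, and with hypothetical host, port and credentials:
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class JndiProbe {
        public static void main(String[] args) throws Exception {
            // args[0] = t3 URL of the server to probe, e.g. t3://10.2.1.143:8001
            // args[1] = JNDI name to test, e.g. jdbc/SOADataSource
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, args[0]);
            env.put(Context.SECURITY_PRINCIPAL, "weblogic");    // hypothetical admin user
            env.put(Context.SECURITY_CREDENTIALS, "welcome1");  // hypothetical password
            Context ctx = new InitialContext(env);
            try {
                // If this succeeds, the data source is bound on (targeted to) that server.
                Object ds = ctx.lookup(args[1]);
                System.out.println(args[1] + " resolves to " + ds.getClass().getName());
            } catch (NamingException e) {
                System.out.println(args[1] + " is NOT bound on " + args[0] + ": " + e.getMessage());
            } finally {
                ctx.close();
            }
        }
    }
    Running it against the server hosting the OSB proxy and against soa_server1 should show where the name is missing; the likely fix would then be to target the data source to (or adjust the OSB copy of weblogic-ra.xml on) the server that fails the lookup.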

  • Best strategy for variable aggregate custom component in dataTable

    Hey group, I've got a question.
    I'd like to write a custom component to display a series of editable Things in a datatable, but the structure of each Thing will vary depending on what type of Thing it is. So, some Things will display radio button groups (with each radio button selecting a small set of additional input elements, so we have a vertical array of radio buttons and, beside each radio button, a small number of additional input elements), some will display text-entry fields, and so on.
    I'm wondering what the best strategy for tackling this sort of thing is. I'm sort of thinking I'll need to do something like dynamically add to the component tree in my custom component's encodeBegin(), and purge the extra (sub-) components in encodeEnd().
    Decoding will be a bit of a challenge, maybe.
    Or do I simply instantiate (via constructor calls, not createComponent()) the components I want and explicitly call their encode*() and decode() methods, without adding them to the view tree?
    To add to the fun of all this, I'm only just learning Faces (having gone through the Dudney book, Mastering JSF, and writing some simpler custom components) and I don't have experience with anything other than plain vanilla JSP. (No EJB, no Struts, no Tapestry, no spiffy VisualDevStudioWysiwyg++ [bah humbug, I'm an emacs user]). I'm using JSP 2.0, JSF 1.1_01, JBoss 4.0.1 and JDK 1.4.2. No, I won't upgrade to 1.5 (yet).
    Any hints, pointers to good sample code? I've looked at some of the sample code that came with the RI and I've tried to navigate the JSF Blueprints stuff, but I haven't really found anything on aggregating components into a custom component. Did I miss something obvious?
    If this isn't a good question, please let me know how I can sharpen it up a bit.
    Thanks.
    John.

    Hi,
    We're doing something very similar. I had a look at the Tomahawk Date component, and it seems to dynamically create InputText components in encodeEnd(). However, it doesn't decode these directly (it only expects a single textual value). I expect you may have to check the request yourself in decode().
    Other ideas would be appreciated, though - I'm still new to JSF.
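    One approach along the lines John describes, as a rough JSF 1.1 / JDK 1.4 style sketch only (the class name, component family and child id below are made up): add the sub-components to the component tree once, instead of purging them in encodeEnd(), so that the normal Apply Request Values phase decodes them for you.
    import java.io.IOException;
    import java.util.Iterator;
    import javax.faces.component.UIComponent;
    import javax.faces.component.UIComponentBase;
    import javax.faces.component.html.HtmlInputText;
    import javax.faces.context.FacesContext;

    // Hypothetical aggregate component: it builds its editable children once,
    // keeps them in the tree, and lets the standard lifecycle decode them.
    public class UIThingEditor extends UIComponentBase {

        public String getFamily() {
            return "example.ThingEditor";          // made-up component family
        }

        public boolean getRendersChildren() {
            return true;                           // we render the children ourselves
        }

        public void encodeBegin(FacesContext context) throws IOException {
            if (getChildren().isEmpty()) {
                // In a real component the set of inputs would depend on the Thing type
                // (radio group plus extra fields, plain text entry, and so on).
                HtmlInputText field = new HtmlInputText();
                field.setId("value");              // id must be unique within this component
                // A ValueBinding per Thing would normally be set here as well.
                getChildren().add(field);          // in the tree, so decode() is handled for us
            }
            super.encodeBegin(context);
        }

        public void encodeChildren(FacesContext context) throws IOException {
            for (Iterator it = getChildren().iterator(); it.hasNext();) {
                UIComponent child = (UIComponent) it.next();
                child.encodeBegin(context);
                if (child.getRendersChildren()) {
                    child.encodeChildren(context);
                }
                child.encodeEnd(context);
            }
        }
    }
    Because the children stay in the tree, there is nothing extra to do in decode(); the trade-off is that you have to keep the child set in sync with the Thing type yourself (for example, clearing and rebuilding getChildren() when the type changes).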

  • What's best strategy for dealing with 40+ hours of footage

    We have been editing a documentary with 45+ hours of footage and presently have captured roughly 230 GB. Needless to say, it's a lot of files. What's the best strategy for dealing with so much captured footage? It's almost impossible to remember it all, and labeling it while logging it seems inadequate, as it's difficult to actually read comments in dozens and dozens of folders.
    Just looking for suggestions on how to deal with this problem for this and future projects.
    G5 Dual Core 2.3   Mac OS X (10.4.6)   2.5 g ram, 2 internal sata 2 250gb

    Ditto, ditto, ditto on all of the previous posts. I've done four long form documentaries.
    First I listen to all the sound bytes and digitize only the ones that I think I will need. I will take in much more than I use, but I like to transcribe bytes from the non-linear timeline. It's easier for me.
    I had so many interviews in the last doc that I gave each interviewee a bin. You must decide how you want to organize the sound bytes. Do you want a bin for each interviewee, or do you want to do it by subject? That will depend on your documentary and subject matter.
    I then have b-roll bins. Sometimes I base them on location and sometimes I base them on subject matter. This last time I based them on location because I would have a good idea of what was in each bin by remembering where and when it was shot.
    Perhaps you weren't at the shoot and do not have this advantage. It's crucial that you organize your b-roll bins in a way that makes sense to you.
    I then have music bins and bins for my voice over.
    Many folks recommend that you work in small sequences and nest. This is a good idea for long form stuff. That way you don't get lost in the timeline.
    I also make a "used" bin. Once I've used a shot I pull it out of the bin and put it "away" That keeps me from repeatedly looking at footage that I've already used.
    The previous posts are right. If you've digitized 45 hours of footage you've put in too much. It's time to start deleting some media. Remember that when you hit the edit suite, you should be one the downhill slide. You should have a script and a clear idea of where you're going.
    I don't have enough fingers to count the number of times that I've had producers walk into my edit suite with a bunch of raw tape and tell me that that "want to make something cool." They generally have no idea where they're going and end up wondering why the process is so hard.
    Refine your story and base your clip selections on that story.
    Good luck
    Dual 2 GHz Power Mac G5   Mac OS X (10.4.8)  

  • Best strategy for migrating GL 6 DW CS4?

    First I gotta say that -- as a decade-long user of several Adobe products -- Adobe has really screwed over long-time users of Adobe GoLive. After a week of messing around with something that should be a slam dunk (considering the substantial amount of $ I've paid to Adobe over the years for multiple copies of GoLive 5, 6 and CS2), I can tell you Adobe's GL > DW migration utility ONLY works for sites created FROM SCRATCH in GL CS2 (they must have the web-content, web-data folder structure). This means that Adobe's migration utility only works for maybe 10% - 15% of the GoLive sites out there, and about 10% - 15% of Adobe's loyal GoLive customers, and it particularly screws over Adobe's longtime GoLive customers. Sweet! (Just like Adobe screwed over users of PhotoStyler -- which was a better image editor than Photoshop, BTW.) Obviously, I would walk away from Adobe and make sure I never ever paid them another cent, but the Republican-Democrat "free market economy" has made sure I don't have that option.
    So I've gotta make Dreamweaver work, which means I've gotta migrate several large sites (the biggest has 5,000 static pages and is about 2 gigs in size) from GoLive 6 (or GoLive CS2 with the older folder structure) to Dreamweaver CS4.
    Which brings me to my question -- what's the best strategy for doing this? Adobe's migration utility is a bad joke, so here's the alternative, real-world plan I've come up with. I'd appreciate any suggestions or comments...
    1) create copies of all my GL components in the content folders for my GL site (not in the GoLive components folder)
    2) replace the content of all GoLive components (in the GoLive components folder) with unique character strings, so that instead of a header with images and text, my old header would look something like xxxyyyzzz9
    3) create a new folder called astoni in the root of my hard drive. Copy all my GoLive web site content (HTML, images, SWF, etc.) into astoni in exactly the structure it was in with GL
    4) create a new Dreamweaver site by defining astoni as the local location for my site, astoni\images as the location for images, etc.
    5) use Dreamweaver and open the newly defined astoni site. Then open each of the GoLive components I copied into a content level folder in astoni, and drag each into the Dreamweaver Assets/Library pane, in order to create library items just like my old GoLive components
    6) use Dreamweaver to Search & Replace out the unique text strings like xxxyyyzzz9 with the content of my new DW library items
    7) refresh site view to make all the links hook up...
    Thanks for your help. Hope this discussion helps others too...

    Instead of ragging on people who are familiar with DW and Site Development, you should be gracious and accept the practical advice you've been given.  A "best strategy" would be to read some tutorials and learn how to work with HTML, CSS and server-side technologies. Without some basic code skills, you're going to find DW an uphill, if not impossible battle.
    Frankly, I don't have free time to hand-hold someone through the excruciating process of migrating a 5,000 page static site from GoLive  to DW. And I doubt anyone else in this forum has either.  We're not Adobe employees.  We don't get paid to participate here.  We are all product users JUST LIKE YOU.
    I'm sorry you're frustrated.  I'm also sorry for your clients. But the problem you have now isn't Adobe's fault. It's yours for not keeping up with server-side technologies or handing-off these huge static sites to more capable web developers.  I'm not saying you need to buy anyone's books, but they are good resources for people willing to learn new things.
    That said, maybe you should stick with GoLive.  The software doesn't have an expiration date on it and will continue working long into the future.  If you're happy using GL, keep using it to maintain your legacy sites. At the same time learn X/HTML, CSS & PHP or ASP.  Use DW CS4 for the creation of new projects.
    FREE Tutorial Links:
    HTML & CSS Tutorials - http://w3schools.com/
    From Tables to CSS Web Design Part 1 -
    http://www.adobe.com/devnet/dreamweaver/articles/table_to_css_pt1.html
    From Tables to CSS Web Design Part 2 -
    http://www.adobe.com/devnet/dreamweaver/articles/table_to_css_pt2.html
    Taking a Fireworks (or Photoshop) comp to a CSS based layout in DW -
    http://www.adobe.com/devnet/fireworks/articles/web_standards_layouts_pt1.html
    Creating your first website in DW CS4 -
    http://www.adobe.com/devnet/dreamweaver/articles/first_cs4_website_pt1.html
    Guidance on when to use DW Templates, Library Items and SSIs -
    http://www.adobe.com/devnet/dreamweaver/articles/ssi_lbi_template.html
    Best of luck,
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists
    www.alt-web.com/
    www.twitter.com/altweb
    www.alt-web.blogspot.com

  • Best practice for data migration install v1.40 - Error 2732 Directory manag

    Hi
    I'm attempting to install SAP Best Practice for Data migration 1.40 on Win Server 2008 R2 (64 bit).
    Prerequisite error
    Installation program stops with missing file error
    The following file was not found
    ... \migration\InstallationWizard\BusinessObjects Data Services\setup.exe
    The file is necessary for successful installation. Please connect to internet or refer to Quick Guide (available on SAP note 1527151) for information regarding the above file.
    Windows installer log displays
    Error 2732 Directory Manager not initialized
    SAP note 1527151 does not exist or is internal.
    Any help appreciated on what the root cause of the error is, as the file does not exist in that folder in the installation zip file.
    The other prerequisite of .NET 3.5.1 is already met.
    The patch has been released since 20.11.2011, so I presume that it is a good installation set.
    Thanks,
    Alan

    Hi Alan,
    There are details on data migration v1.4 installations on the SAP website and marketplace. The link below should guide you to the right place. It has a PowerPoint presentation and other useful links as well.
    http://help.sap.com/saap/sap_bp/DMS_V140/DMS_US/html/index.htm
    Arun

  • Best Course for Data Warehousing

    Hi,
    I am planning to join a data warehousing course. I heard there are a lot of courses in data warehousing:
    Data warehousing with ETL tools or
    Data warehousing with Crystal Reports or
    Data warehousing with Business object or
    Data warehousing with Informatica or
    Data warehousing with Bo-Webel or
    Data warehousing with Cognos or
    Data warehousing with Data Stage or
    Data warehousing with MSTR or
    Data warehousing with Erwin or
    Data warehousing with oracle.
    Please suggest which is best to choose and which has more scope, because I don't know the ABCs of data warehousing, but I have some experience in Oracle.
    Is it a must to have work experience in data warehousing before I can get a job? Please also tell me the best book for data warehousing, one that starts from scratch. Please give your suggestions on my queries.
    Thanks & Regards,
    Raji

    Hi,
    Basically, a data warehouse is a concept. To develop a DW, we mainly need two tools: an ETL tool and a reporting tool.
    The few famous ETL tools are
    Informatica
    Data Stage
    Few famous Reporting tools are
    Crystal Reports
    Cognos
    Business object
    As a DW developer you should be aware of at least one ETL tool and at least one reporting tool. The combination is your choice. It is better to find out the best combination in terms of the job market, and then learn them.
    Erwin is a data modeling tool. It can also be used in a DW implementation. You already have experience with Oracle, so my advice is to go for data warehousing with Oracle or data warehousing with Informatica, and learn one reporting tool. I do not know whether there is any reporting tool available from Oracle.
    My suggestions on books:
    Data Warehousing Fundamentals by Paulraj Ponniah and
    The Data Warehouse Toolkit.
    http://www.inmoncif.com/about.html is one of the best sites for data warehousing.
    With rgds,
    Anil Kumar Sharma .P
    Assigning points is the way to say thanks in SDN site.

  • Best Strategy for Integrating Crystal/Business Objects with OpenACS Environment

    Post Author: devashanti
    CA Forum: Deployment
    I'm working for a client that uses AOL server and OpenACS for their web services/applications. I need suggestions on the best strategy to integrate a reporting solution using Business Objects XI. Ideally I'd like to send an API call from our web application's GUI to the Crystal API with report parameter values to pass into specific reports called via the API - I can get it down to one integer value being passed - or if this is not possible a way to seamlessly, from the end user perspective, move into a reporting module. We are using an Oracle backend database. I'm experienced with creating stored procedures and packages for reporting purposes.
    Although I have many years of experience integrating the Crystal active X controls into n-tier client server applications, the past few years I have had little opportunity to work with Business Objects and the newer versions of Crystal or web based solutions with Crystal Reports. I signed up to try out crystalreports.com, but I doubt my client will find this solution acceptable for security reasons as the reports are for an online invoicing system we are developing. However we can set up a reports server in-house if necessary, so it gives me some testing ground.
    Can anyone provide suggestions for a doable strategy?

    Please post this query to the Business Objects Enterprise Administration forum:
    BI Platform
    That forum is monitored by qualified technicians and you will get a faster response there. Also, all BOE queries remain in one place and thus can be easily searched in one place.
    Thank you for your understanding,
    Ludek

  • SAP Best Practices for Data Migration :repositories only on MS SQL Server ?

    Hi,
    I'm implementing the "SAP Best Practices for Data Migration" (see https://websmp109.sap-ag.de/bp-datamigration).
    As part of the installation you have to install MS SQL Server Express Edition. The installation guide contains detailed steps to do this. All repositories for Data Services should be running on SQL Server, according to the installation guide.
    The customer I'm working for now does not want to use SQL Server, but DB2, as company standard.
    So I use DB2 for the local and profiler repositories.
    I notice however that the web application http://localhost:8080/MigrationServices does not support DB2. The only database type you can select in the configuration area is MS SQL Server.
    Is this a limitation, or by design?

    Hans,
    The current release of SAP Best Practices for Data Migration, v1.32, supports only MS SQL Server. The intent when developing the DM content was to quickly set up a temporary, standardized data migration environment, using tools that are available to everyone. SQL Server Express was chosen to host the repositories, because it is easy to set up and can be downloaded for free. Some users have successfully deployed the content on Oracle XE, but as you have found, the MigrationServices web application works only with SQL Server.
    The next release, including the web app, will support SQL Server and Oracle, but not DB2.
    Paul

  • Best practices for data migration

    Hi All,
    This thread is for sharing ideas and knowledge about making the best use of data migration with Business Objects Data Services.


  • Best strategy for burning dual layer DVD

    I've got a project that won't fit on a single layer DVD, so I've got it set up for dual layer and "best quality", which results in a 7.38GB disk image.  However, iDVD 8 warned me when burning the disk image that some utilities may not be able to burn a reliable dual layer disk copy and to use iDVD instead; does this include Disk Utility?
    I always use Disk Utility to make copies, but iDVD took almost 7 hours to burn the disk image, and I can't afford to wait that long for each copy.  So what's the best strategy for burning dual layer DVDs?

    Subsequent copies burn in far less time immediately after the first, provided you don't quit iDVD.
    If you know ahead of time you'll need more than a few copies, I'd recommend burning from a disc image saved to the desktop and reducing the burn speed (4x or lower). I prefer to use Roxio Toast myself, but others have had success with Apple's Disk Utility as well.

  • Absolute Best Strategy for Cropping

    What's the absolute best strategy for high-precision cropping when time is not an issue and you want "perfect" quality?
    What's the absolute best strategy for low-precision cropping when time is an issue but standards are still high?
    There's gotta be different strategies for doing this, and I was curious if we could choose a winner.
    Thanks,

    Heh, Marian, that about sums it up. 
    In all seriousness...  How does "precision" connect with "cropping"?  I've always thought of "cropping" as mostly based on an artistic judgment, not something done for precision.  Perhaps you could describe your photography and what you expect from your workflow?
    Are you cropping photos of documents?  People?  Stars, nebulae, and galaxies?
    Is this "precision" you speak of an attempt to express "minimal loss of pixel data"?  If that's the case, get the most powerful computer you can and always work at upsampled resolutions.  And don't resample during cropping.
    One last comment:  Consider employing the crop tool in the Camera Raw dialog.  This stores metadata telling future conversions how to crop the image, but the original raw data is still all intact.  Note that you can open a whole set of images in Camera Raw, do exposure and cropping changes, and click [Done] to save that metadata.
    -Noel

  • Can I access a user guide for 5.1.1 while using my iPad?? I will not have the internet for 10 days so would like to download something.

    Can I access a user guide for 5.1.1 while using my iPad?? I will not have the internet for 10 days so would like to download something before I leave!

    Download the free iBooks app from the App Store if you don't already have it.  Then download the .pdf here: iPad User Guide (For iOS 5.1 Software).  Next, import it into your iTunes library (iTunes>File>Add to Library).  Then sync it to your iPad by selecting it on the Books tab of your iTunes sync settings and syncing your iPad.  It will be available in iBooks in the PDFs section (tap Collections on the top left, then tap PDFs).

  • Best Practise for Data Refresh & Hierarchy

    Hi,
    During a recent discussion with one of our BI user groups, questions were raised as to what the best practices are for handling the following two issues.
    Issue 1:
    If entries are posted to prior periods in SAP R/3 (outside of the daily auto-refresh range), the current process is that the user group will ask us to conduct a manual refresh in BI for the prior periods which are affected.
    Question: Is it possible to set up a trigger in the system, so that BI knows which periods are changed and automatically refreshes data for those periods?
    Issue 2:
    If a hierarchy used in the reports is modified, there might be an adverse impact on the financial data the user group reports. The current process we have in place is to run a group of BI reports for both the current year and the prior year to make sure nothing is impacted, but there is a limitation to this process: what if there is no impact on the current or prior year, but on the years prior to that?
    Question: What other global companies do to minimize such reporting impact, especially when they have hundreds of complex reports?
    If someone has any info on this, please share it.
    Thanks all for your support.
    Regards,
    Murali

    Hi Srini,
    1. SAP suggests implementing a data archiving strategy as early as possible to manage database growth.
    However, people usually only think of archiving when they run into problems like large data volumes, slow system response times, performance issues, etc.
    2. There is a proper way to implement data archiving. The database has to be analyzed first to identify the top DB tables and archiving objects.
    3. Based on the DB analysis, the data archiving plan has to be implemented according to the data management guide.
    4. There is a minimum period, known as residence time, that has to be completed before any data can be archived. Once a document is business-complete and has served its minimum required period in the database, it can be archived.
    5. Before going for data archiving there are many steps to be followed, like analysis, configuration, etc., which you can see in detail at the link below:
    http://help.sap.com/saphelp_47x200/helpdata/en/2e/9396345788c131e10000009b38f83b/frameset.htm
    Let me know if this helps you.
    -Supriya
    Edited by: Supriya  Shrivastava on May 4, 2009 10:38 AM

  • Table Owners and Users Best Practice for Data Marts

    2 Questions:
    (1) We are developing multiple data marts that share the same instance. We want to deny access to the users while tables are being updated. We have one generic user (BI_USER) with read access through one of the popular BI tools. For the current (first) data mart we denied access by revoking the privilege from BI_USER; however, going forward with other data marts the tables will get updated on a different schedule, and we do not want to deny access to all the data marts. What is the best approach?
    (2) What is the best methodology for table ownership of tables in different data marts that share tables across marts? Can we create one generic ETL_USER to update tables with different owners?
    Thanx,
    Jim Masterson

    If you have to go with generic logins, I would at least have separate generic logins for each data mart.
    Ideally, data loads should be transactional (or nearly transactional), so you don't have to revoke access ever. One of the easier tricks to accomplish this is to load data into a shadow table and then rename the existing table and the shadow table. If you can move the data from the shadow table to the real table in a single transaction, though, that's even better from an availability standpoint.
    If you do have to revoke table access, you would generally want to revoke SELECT access to the particular object from a role while the object is being modified. If this role is then assigned to all the Oracle user accounts, everyone will be prevented from viewing the table. Of course, in this scenario, you would have to teach your users that "table not found" means that the table is being refreshed, which is why the zero downtime approach makes sense.
    You can have generic users that have UPDATE access on a large variety of tables. I would suggest, though, that you have individual user logins to the database and use roles to grant whatever ad-hoc privileges users need. I would then create one account per data mart, with perhaps one additional account for the truly generic tables, that own each data mart's objects. Those users would then grant different roles different database privileges, and you would then grant those different roles to different users. That way, Sue in accounting can have SELECT access to portions of one data mart and UPDATE access to another data mart without granting her every privilege under the sun. My hunch is that most users should not be logging in to, let alone modifying, all the data marts, so their privileges should reflect that.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
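    For what it's worth, here is a minimal JDBC sketch of the shadow-table swap Justin describes. The connection URL, credentials and table names (sales_fact, sales_fact_shadow, staging_sales) are hypothetical, and the shadow table would need the same indexes and grants as the real one before the swap:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ShadowTableSwap {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "etl_user", "secret");
            con.setAutoCommit(false);
            Statement stmt = con.createStatement();
            try {
                // 1. Load the shadow table while readers keep using SALES_FACT.
                stmt.executeUpdate("TRUNCATE TABLE sales_fact_shadow");
                stmt.executeUpdate("INSERT INTO sales_fact_shadow SELECT * FROM staging_sales");
                con.commit();
                // 2. Swap the names. Each RENAME is DDL and commits implicitly, so the
                //    window in which SALES_FACT does not exist is only the instant
                //    between these two statements.
                stmt.executeUpdate("ALTER TABLE sales_fact RENAME TO sales_fact_old");
                stmt.executeUpdate("ALTER TABLE sales_fact_shadow RENAME TO sales_fact");
            } finally {
                stmt.close();
                con.close();
            }
        }
    }
    A reader that hits the table exactly in that gap would still get a "table or view does not exist" error, which is why Justin notes that a true single-transaction load is even better where it is feasible.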
