How to handle datasources during source system downtime

Hi,
How do we handle different datasources when the source system has downtime? Especially LIS datasources.
Any documentation on this topic would be helpful.
Regards,
San

San,
LIS or LO datasources...?
For an LO datasource:
Before the downtime:
1. Run the V3 job and push all data from the extraction queue (LBWQ) into the delta queue (RSA7).
2. Pull all deltas into BW (run the InfoPackage at least twice) until the delta queue is empty.
3. Stop the V3 jobs.
Once the system is back up:
4. Check whether the initialization is still OK.
5. If not, re-initialize without data transfer.
6. Restart the V3 job on its usual schedule.
7. Continue the deltas.
Hope it helps,
Srini

Similar Messages

  • Can I make an instance of an EJB home handle / DataSource shared by EJB objects?

    All EJB gurus,
    As far as I know, it is a general rule of thumb to cache EJB home handles and the DataSource object in the EJB bean instance. But can I go a step further and share the same instance of the EJB home handles and DataSource object across more than one EJB bean instance?
    (I checked the methods of javax.ejb.EJBHome and javax.sql.DataSource. They are not declared as synchronized. It seems to me that EJB home handles and DataSource objects are not thread-safe and therefore not suitable for sharing. And even if they are thread-safe, sharing them across multiple EJB bean instances may interfere with the thread management of the EJB container. Am I correct?)
    I would highly appreciate it if someone could share their insight on this issue.
    Thanks & regards,
    Danny

    Okay, you got me there. However, it's usually better practice to start a new thread with your specific question instead of resurrecting old threads that (obviously) nobody cared enough to answer.
    DataSources are retrieved from the container via JNDI. You may (generally) cache them in order to avoid the repeated JNDI lookup, without worrying about threads, as there is (usually) only one DataSource object per server (or per node in a cluster). Do not cache (in your code) the Connection objects obtained from the DataSource; always close them in order to return them to the pool maintained by the DataSource.
    Caching EJB HomeHandles is the accepted manner of avoiding repeated JNDI calls to locate EJBs. Typically, extracting the EJBHome from the HomeHandle re-initializes whatever network operations are embedded in the EJBHome object(s) by the vendor's implementation. Once again, you do not need to worry about threads: the container and the vendor implementation are already taking care of that for you, transparently.
    Think about it: if there were threading issues, even retrieving the above objects via JNDI would mean that J2EE application servers weren't thread-safe and all operations would essentially block each time. Obviously this is not the case, since both DataSources and EJBHome objects can have multiple clients using them simultaneously. Just because there's no explicit synchronized modifier on any of the methods defined in these interfaces doesn't mean they're single-threaded objects.
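    As an illustration of the caching pattern described above, here is a minimal sketch (the JNDI name java:comp/env/jdbc/TestDB and the class name are assumptions for the example):
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;
    // Caches the DataSource once; Connection objects are never cached.
    public class DataSourceCache {
        private static volatile DataSource ds;
        public static DataSource getDataSource() throws NamingException {
            if (ds == null) {
                // At worst two concurrent lookups happen; both return the same pool.
                ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/TestDB");
            }
            return ds;
        }
    }
    // Usage: one Connection per unit of work, always closed in finally
    // so it goes back to the container's pool:
    //   Connection con = DataSourceCache.getDataSource().getConnection();
    //   try { /* JDBC work */ } finally { con.close(); }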

  • Downtimes, how do you handle them?

    I was just wondering how the rest of the Nation handles their IronPort downtimes? Are they a big production, scheduled, notification to clients, approved by management? Or are they stealthy, done behind the scenes, nobody is aware of the changes?
    I can see pros and cons to each way, and I'm just wondering what the rest of the community does.
    What I'm currently facing is that our department is trying to minimize scheduled downtimes as much as possible. I was just told that my downtime window will coincide with our Active Directory infrastructure downtimes. My concern is: if something breaks, was it a domain controller issue or an IronPort issue?
    What are everyone else's thoughts?

    I don't think one is necessarily exclusive of other methods. I prefer to use a solution that fits the problem at hand.

  • Move a large database to another server using RMAN with minimal downtime

    Hi,
    We have a large database, around 20 TB. We want to migrate (move) the database from one server to another. We do not want to use the standby option.
    1) How can we move the database using RMAN with minimal downtime?
    2) Other than RMAN, is there any other option available to move the database to the new server?
    For option 1 (restore using RMAN), is the following approach valid? If it is, how should it be implemented?
    a) Take a full backup from the source (source DB is up)
    b) Restore the full backup on the target (source DB is up)
    c) Take an incremental backup from the source (source DB is up)
    d) Restore the incremental backup on the target (source DB is up)
    e) Repeat steps c and d until the downtime starts (source DB is up)
    f) Shut down and mount the source DB, and take a final incremental backup (source DB is down)
    g) Restore the last incremental backup and start the target database (target is up and the application accesses the new DB)
    Database version: 10.2.0.4
    OS: Sun Solaris 10
    Edited by: Rajak on Jan 18, 2012 4:56 AM

    Simple:
    I do this all the time to relocate file system files, but the principle is the same. You can do this in iterations, so you do not need to do it all at once:
    Starting at 8 AM, move the less-used files, and the more active files in the afternoon, using the following backup method.
    SCRIPT-1
    RMAN> BACKUP AS COPY
          DATAFILE 4                  # "/some/orcl/datafile/users.dbf"
          FORMAT "+USERDATA";
    Do as many files as you think you can handle during your downtime window.
    During your downtime window, stop all applications so there is no contention in the database.
    SCRIPT-2
    ALTER DATABASE DATAFILE 4 OFFLINE;
    SWITCH DATAFILE 4 TO COPY;
    RECOVER DATAFILE 4;
    ALTER DATABASE DATAFILE 4 ONLINE;
    I then delete the original file at some point later, after we have made sure everything has recovered and been brought back online successfully.
    SCRIPT-3
    DELETE DATAFILECOPY "/some/orcl/datafile/users.dbf";
    For datafiles/tablespaces that are really busy, I typically copy them later in the afternoon, as there are fewer archive logs to go through in order to make them consistent. The ones copied in the morning have more logs to go through, but less likelihood of there being anything to do.
    Using this method, we have moved upwards of 600 GB at a time, and the actual downtime to do the switchover is < 2 hrs. YMMV. As I said, this can be done in stages to minimize overall downtime.
    If you need some documentation support, see:
    http://docs.oracle.com/cd/E11882_01/server.112/e18951/asm_rman.htm#CHDBDJJG
    And before you do ANYTHING... TEST TEST TEST TEST TEST. Create a dummy tablespace on QFS and use this procedure to move it to ASM to ensure you understand how it works.
    Good luck! (Hint: scripts to generate these scripts can be your friend.)
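    Picking up that hint, here is a minimal sketch of such a generator in Java/JDBC (the connection details are placeholders, the Oracle JDBC driver is assumed to be on the classpath, and the +USERDATA disk group is taken from the example above); it prints one BACKUP AS COPY command per datafile:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    public class GenerateCopyScript {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; point this at the source database.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@source-host:1521:orcl", "system", "password");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery(
                    "SELECT file_id, file_name FROM dba_data_files ORDER BY file_id");
            while (rs.next()) {
                // One RMAN command per datafile, with its current path as a comment
                System.out.println("BACKUP AS COPY DATAFILE " + rs.getInt(1)
                        + " FORMAT \"+USERDATA\";  # " + rs.getString(2));
            }
            con.close();
        }
    }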

  • How to handle/extract hierarchies in master data from R/3

    Hi Experts!!!
    I'm currently working on an HR implementation project, and this is my first project. The implementation is being done on BI 7.0 and ECC 6.0. I have some queries regarding the extraction of master data from R/3, as follows:
    1. By default all the datasources are in 3.x technology, and I've migrated all the datasources with export and successfully run InfoPackages and DTPs into their respective InfoObjects/data targets. But when it comes to the master data datasources with hierarchies, I'm unable to migrate them, as there is no MIGRATE option in the context menu. When I tried to create an InfoPackage, it asked me to create transfer rules first and then run the InfoPackage. Is it mandatory to work with the 3.x technology (transfer rules/update rules) in the case of master data hierarchies? If so, how should datasources with hierarchies be handled?
    2. Is it required to have a data staging component like a DSO while extracting master data datasources from SAP R/3 into their respective InfoObjects (data targets)? Or can I schedule the InfoPackage and DTP directly into the InfoObject, without any DSOs in between, for HR master data?
    Good day!!!

    Hello,
    1) All DataSources created in BI 7.0 use the new design: master data attribute, text, and transaction DataSources can all be created in the new BI 7.0 style. Hierarchy DataSources, however, remain 3.x; if you double-click a hierarchy DataSource, you will see the title "...Emulated 3.X...". The DataSource should work perfectly fine, except that it stays in the "Modified" version; I believe SAP simply has not yet provided a way to upgrade hierarchy DataSources.
    Also, RSDS does not support hierarchy DataSources in BI 7.0, so we cannot use transformations and DTPs with hierarchies. Loading them is therefore very similar to the way we used to load in 3.x.
    Check the link below for a step-by-step procedure to load a hierarchy using a flat file:
    http://help.sap.com/saphelp_nw04s/helpdata/en/fa/e92637c2cbf357e10000009b38f936/content.htm
    2) I don't think you need a DSO in between for loading master data from R/3 into a BI InfoObject.
    Regards,
    Dhanya

  • Database access from session bean

    Hello,
    I have a stateless session bean which performs some complex
    calculations, and also does some database access.
    For the database access the bean class has a datasource as
    follows:
    import javax.ejb.EJBException;
    import javax.ejb.SessionBean;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    public class TestBean implements SessionBean {
        private DataSource ds_;
        // other SessionBean callbacks omitted for brevity
        public void ejbCreate() {
            getDataSources();
        }
        private void getDataSources() {
            try {
                Context ictx = new InitialContext();
                ds_ = (DataSource) ictx.lookup("java:comp/env/jdbc/TestDB");
            } catch (Exception e) {
                e.printStackTrace();
                throw new EJBException(e);
            }
        }
    }
    Now this class has a method (which is also in the remote interface)
    calculateSomething(). This method constructs a number of other
    objects that do the actual calculation, and one of these objects
    does the actual database access. How would another object be able to
    use the datasource that was constructed in the bean class?
    I could pass the datasource reference to that object, but that would
    break my encapsulation. This is because that object does not get
    created directly by the bean object, but rather the way the objects
    interact is something like A -> B -> C, where A is the TestBean, and
    C is the object that does the DB access. If I passed the datasource,
    I would need to make B aware of the datasource, which doesn't
    seem good design, because B doesn't do any database access.
    Alternatively I could do the lookup in class C, but that would
    degrade the performance, as an object C gets created and destroyed
    every time the calculateSomething() method is called.
    A third option I have thought of, is to add a public method to the
    bean that returns a connection. Whenever another object gets
    created, a reference to the bean object will be passed along. Then,
    if another object needs to do database access, it will call back
    the bean to get a connection. This seems just as bad (if not worse)
    than the first option.
    Does anyone have an elegant solution for this situation? What is
    the best practice of handling datasources when a bean class doesn't
    do the database access itself? In all the examples I've seen so far,
    all the functionality was in the session bean class, but again that
    doesn't seem good OO design, and would result in a single huge class.
    regards,
    Kostas

    Thanks again to both for the replies. Here are my responses:
    Yi Lin: Yes, I know that an entity bean would solve this problem, however it has been decided not to use entity beans so this is not my call (I think the reason entity beans are not allowed in this project is that they are considered risky: there are other applications that access the same database, so if the container caches entity bean data as you describe, then the users might get inconsistent results).
    Gerard: Actually, object B is the one that has the business logic, and C is a peer object that only does database access and no calculations. For example, B could be Customer and C CustomerDB. This is why object B does not have any knowledge of datasources or connections. So my design does not appear to be that bad!
    As far as the factory you propose is concerned, I cannot understand how it would solve my problem. In order to solve this situation, the factory would need to be persistent, i.e. get created by the ejbCreate() method and destroyed whenever the container decides to destroy the bean. There would be no point in object C creating the factory, as I would incur the overhead of the JNDI lookup every time I create a C.
    So the question remains the same: how do I pass a reference to the factory from A to C without making B aware of it?
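    For what it's worth, one conventional way out (a sketch of the usual static service-locator idiom, not something proposed in this thread; the class CustomerDB, the table name, and the JNDI name are hypothetical) lets C reach the shared pool itself, so nothing has to be passed through B:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    // C stays a pure data-access peer: it locates the container-managed
    // DataSource on first use, so neither A nor B hands it anything.
    public class CustomerDB {
        private static volatile DataSource ds;  // one cached JNDI lookup
        private static DataSource ds() throws Exception {
            if (ds == null) {
                ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/TestDB");
            }
            return ds;
        }
        public int countCustomers() throws Exception {
            Connection con = ds().getConnection();
            try {
                PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM customer");
                ResultSet rs = ps.executeQuery();
                rs.next();
                return rs.getInt(1);
            } finally {
                con.close();  // return the connection to the pool
            }
        }
    }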

  • Retrieving .doc and .rtf files in SOAP attachments

    Please help me.
    I'm using JAXM to retrieve SOAP attachments. When I retrieve text files (.txt), it retrieves the content of the document without any problem, but when I try to retrieve .doc or .rtf files, it gives the content as
    java.io.FileInputStream@587c94
    (retrieving attached files which are received from a client).
    How do I get the content from a Word or RTF document? This is the way I tried to get the content:
    while (it.hasNext()) {
        AttachmentPart ap = (AttachmentPart) it.next();
        contentType = ap.getContentType();
        content = (String) ap.getContent();           // returns a String only for text content
        p.println("content---->" + content);          // writing to a text file
        System.out.println("*** attachment content: " + content);
    }
    Thanks, nams

    Here is the code I used to send and receive a PDF file as a SOAP attachment. Note that if the sender uses a DataHandler for the attachment's content, there's no need to set the MIME type explicitly because the DataHandler does it for you.
    * Sender
    // create the data source and data handler
    DataSource source = new FileDataSource("form.pdf");
    DataHandler handler = new DataHandler(source);
    // create attachment for message
    AttachmentPart attachment = message.createAttachmentPart(handler);
    // set content id (optional)
    attachment.setContentId("enrollment_form");
    // add attachment to message
    message.addAttachmentPart(attachment);
    // send message
    providerConnection.send(message);
    * Recipient
    public class Receiver extends JAXMServlet implements OnewayListener {
       public void onMessage(SOAPMessage message) {
          try {
             // get the first attachment
             Iterator it = message.getAttachments();
             AttachmentPart attachment = (AttachmentPart) it.next();
             if (attachment.getContentType().equals("application/pdf")) {
                // read the contents into a byte buffer
                ByteArrayInputStream contentStream =
                    (ByteArrayInputStream) attachment.getContent();
                // use standard Java I/O methods to save them in a file
                int bytesToRead = contentStream.available();
                byte[] buffer = new byte[bytesToRead];
                contentStream.read(buffer);
                // write the buffer to a new file
                FileOutputStream file = new FileOutputStream("form.pdf");
                file.write(buffer);
                file.close();
                log("Attachment " + attachment.getContentId() + " with type "
                    + attachment.getContentType() + " written to form.pdf");
             } else {
                log("attachment content has MIME type " + attachment.getContentType()
                    + ", Java type " + attachment.getContent().getClass());
             }
          } catch (Exception e) {
             log("failed to process attachment: " + e.getMessage());
          }
       }
    }
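    Coming back to the original question about .doc and .rtf: for binary MIME types, getContent() hands back a raw stream rather than a String, so one option (a sketch, assuming only the standard AttachmentPart.getDataHandler() method and continuing from the question's loop; the output file name is illustrative) is to copy the attachment's bytes out unchanged instead of casting:
    while (it.hasNext()) {
        AttachmentPart ap = (AttachmentPart) it.next();
        // stream the raw bytes instead of casting getContent() to String
        InputStream in = ap.getDataHandler().getInputStream();
        FileOutputStream out = new FileOutputStream("attachment.doc");
        byte[] buf = new byte[4096];
        for (int n; (n = in.read(buf)) != -1; ) {
            out.write(buf, 0, n);  // .doc/.rtf content written as-is
        }
        out.close();
        in.close();
    }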

  • How to handle multiple datasources in a web application?

    I have a J2EE web application with servlets and JavaServer Pages. Besides this, I have an in-house developed API for certain services built using Hibernate and Spring, with POJOs and some EJBs.
    There are 8 databases which will be used by the web application. I have heard that multiple datasources are hard to design around with Spring; and considering that the APIs use Spring and Hibernate, I have no choice but to keep them.
    Does anyone have a good design specification for how to handle multiple datasources? The datasource (database) will be chosen by the user in the web application.

    Let me get this straight: you have a web application that uses the Spring Framework and Hibernate to access the database, and you want the user to be able to select the database that he wants to access through Spring and Hibernate.
    Hopefully you are using Spring's Hibernate DAO support. I know you can have more than one Spring application context, so you can try loading a separate application context for each database. Each application context would have its own configuration files with the connection parameters for its datasource. You could still use JNDI entries in web.xml for each datasource.
    Then you would need a service locator, so that when a user selects a datasource, he gets the application context for that datasource, which he then uses for the rest of his session. See the sketch below.
    I think it is doable. It means a long load time, and you'll need to keep the application contexts as small as possible to conserve resources.
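    A minimal sketch of that locator idea (the spring-<db>.xml file naming scheme and the customerDao bean name are assumptions for the example):
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;
    // One lazily loaded application context per database; each context file
    // defines its own DataSource/SessionFactory with that database's parameters.
    public class ContextLocator {
        private static final Map<String, ApplicationContext> contexts =
                new ConcurrentHashMap<String, ApplicationContext>();
        public static ApplicationContext forDatabase(String dbName) {
            ApplicationContext ctx = contexts.get(dbName);
            if (ctx == null) {
                ctx = new ClassPathXmlApplicationContext("spring-" + dbName + ".xml");
                contexts.put(dbName, ctx);
            }
            return ctx;
        }
    }
    // Usage, once the user has picked a database in the web UI:
    //   CustomerDao dao = (CustomerDao)
    //           ContextLocator.forDatabase(selectedDb).getBean("customerDao");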

  • How to handle timestamp datasources during migration

    Hi All
    " I tried searching the forum,but didnt get the relevant one,hence posting this question".
    While migration from 4.7 to ECC in R/3 system,
    how to handle the Timestamp datasources( especially FI like COPA and other ) while extarcting the data from source to BW during source system migration.Since we need to empty the delta queue and should make sure that there are no delta records exists in delta queue.
    Like for sales datasources using LO,we will be executing the V3 jobs for execting the LUW's from LBWQ to RSA7?  In the same way is there any particular way for this also.
    Anyone whi knows abt this pls share your views
    Regards
    Shankar

    Hello Shankar,
    Before the upgrade (or Support Package import), all extraction queues and open update orders in all clients must be processed, and the content of the setup tables must be deleted. To avoid problems during the upgrade, or to correct them, carry out the following steps:
    1. Call transaction SMQ1 and check whether all queues in all clients (client = '*', queue name 'MCEX*') have been processed. To process the queues, start the collective run report for each application in the displayed clients. If you no longer need the data in the BW system, deactivate the relevant extraction queues and DataSources in the LO cockpit (transaction LBWE) and delete the queue entries in transaction SMQ1.
    2. If you use the non-serialized V3 update (usually only for application 03): start collective run report RMBWV303, then check the update orders in transaction SM13. If there are incorrect update orders in SM13, correct them and start the collective run report again. If you no longer require the update orders, you can delete them. There may be inconsistencies between tables VBMOD and VBHDR; for further information, see Notes 652310 and 67014.
    3. Before the upgrade, delete the contents of the setup tables. Execute report RMCEX_SETUP_ENTRIES to find out which setup tables still contain entries. You can use transaction LBWG to delete the contents of the setup tables for all clients.
    Unfortunately, the check that the system carries out during the upgrade, or when you import a Support Package, does not display all affected applications. Note 1083709 therefore provides a check report that you can use to determine all affected applications and tables or queues.
    For more detailed information, please check the following notes:
    1083709 - Error when you import Support Packages
    1081287 - Data extraction orders block the upgrade process
    I hope I can be helpful.
    Thanks,
    Walter Oliveira.

  • Concept: How can I handle a hole in a non-SAP datasource?

    Hello,
    I want to discuss a problem with a non-SAP datasource here:
    We load data from an Oracle DB with the DB Connect technique. Each record we load has an IDATE (record created) and a UDATE (record changed). Based on the UDATE we create a kind of delta load:
    The UDATE is the selection criterion of the InfoPackage (full upload).
    In the start routine of the transfer rule we detect the newest UDATE and store it in the TVARV table. This UDATE is the low selection limit for UDATE in the next load, so we reload only the records which were changed after the last load.
    The data are transferred to an ODS (overwrite) and so on.
    That works perfectly!
    But now we have found out that in this non-SAP datasource it is possible to delete records directly. In SAP we usually have the procedure that a reversal document is created to delete a record (for example, FI documents). In effect, this non-SAP datasource leaves a hole in the database: a record is deleted -> no UDATE change -> no change in BW. That means the record is still in BW!
    Have you had a similar problem? Or do you have an idea how we could fix this?
    I cannot load all the data every day by full upload; that takes too much time, with more than 2 million records from five different datasources.
    Thank you for your attention
    Ralf

    Hello,
    To close my post, here is my solution:
    - ODS A is filled by delta using the UDATE field.
    - ODS B is filled by full upload. It contains only the key fields of ODS A (the load needs only 20 minutes).
    - ODS C is filled from ODS A (key fields only) per full upload. In the start routine of the update rules there is a check: delete all data which are in ODS B. The result: ODS C contains only the data which have to be deleted from ODS A.
    - Full upload ODS C into ODS A, setting the recordmode to 'R' in the start routine. The result: these records are deleted from ODS A, and the change log of ODS A is updated.
    Before the next load, the contents of ODS B and C are deleted.
    (cool picture:)
    ..........CUBE X
    ..............|
    .(delta)....|
    ..............|........ODS C(diff. A - B)
    ..............|........|....|
    ..............|..(R).|....|.\  (check)
    ..............|........|....|..\
    ............ODS A....ODS B(keys)
    .(delta)...|................| (full)
    ..........non sap systems
    It follows the principle of Sudhi's idea, but I did not do it with the PSA, because I have five different DataSources: I would have needed five calls to identify the records which have to be deleted and five calls to edit the PSA.
    This way I load everything into the ODS and run the procedure only once.
    All the best Ralf
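    At its core, Ralf's trick is a plain set difference: warehouse keys minus current source keys are the deletions. As a language-neutral illustration (a sketch only; the class and field names are invented for the example), the same logic looks like this:
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;
    // Keys present in the warehouse (ODS A) but absent from the latest full
    // key upload (ODS B) are exactly the records to flag with recordmode 'R'.
    public class DeletionDetector {
        public static List<String> keysToDelete(Set<String> warehouseKeys,
                                                Set<String> currentSourceKeys) {
            List<String> deleted = new ArrayList<String>();
            for (String key : warehouseKeys) {
                if (!currentSourceKeys.contains(key)) {
                    deleted.add(key);  // deleted in the source, still in BW
                }
            }
            return deleted;
        }
    }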

  • Is connecting through a datasource slower than handling connections through code?

    We have an application that connects to a database using the latest Oracle 10g thin drivers. When a connection is made through the datasource, the response from the database takes anywhere from 15 to 25 seconds. When we make a connection through the code base using the same connect string, it takes less than 1 second. Is this normal?
    We see that the response comes back in 30-packet chunks over the network regardless of the size of the query; it just shows up more the larger the query is. Is this 30-packet chunking a WebLogic limitation? Can it be bypassed?

    Is yours an external application, either making direct JDBC connections to the DBMS or using a WebLogic server and a WebLogic DataSource?
    If so, the result is expected: going from the application to WebLogic and then to the DBMS is expected to be slower than going directly from the application to the DBMS.

  • Posting block question in "How to Handle Inventory Management Scenarios"

    Hi all, I have a question about the document "How to... Handle Inventory Management Scenarios in BW".
    Can anyone please tell me what a posting block is in this document?
    Also, I did not understand what a validity table is.
    Thanks,
    Sabrina.

    Hi Sabrina!
    A 'posting block' period is something that you are now facing in inventory management, but you have to consider it in ALL Logistics Cockpit flows (whatever you can see in LBWE).
    The requirement is easily explained: as you know, to fill a setup table you have to activate the extract structure of the datasource for which you want to run the setup job (otherwise the system says that no active extract structure exists, remember?). But if you do so while posting operations are open (in other words, while users can create new documents or change existing ones), you run the risk that these records are automatically included in the extraction queue (or in SM13 if you are using the unserialized method) AND also collected in the setup table. You would then get the same records from the initial load (from the setup table) and again from the delta load (from the queue): a nice data duplication.
    To avoid this situation, a "posting block" (or a "free-posting period", or a "downtime") is required: no one can post new documents during your setup job.
    Hope it is clearer now.
    Bye,
    Roberto

  • Error while creating a datasource in Planning 9.3.1 on an Oracle 11.2 database

    I am unable to create a datasource in Planning 9.3.1 on an Oracle 11.2 database. I have configured Shared Services and registered Planning with Shared Services, but I am unable to create the data source after application deployment and instance creation.
    I am getting the following error:
    Launching Hyperion Configuration Utility Program
    HYPERION_HOME: C:\Hyperion
    In HspDBPropertiesLocationPanel constructor
    In HspDBPropertiesLocationPanel queryEnter
    Resource Bundle is java.util.PropertyResourceBundle@322394
    Product Name in file is PLANNING
    Availability Date is 20051231
    Creating rebind thread to RMI
    Resource Bundle is java.util.PropertyResourceBundle@322394
    Product Name in file is PLANNING
    Availability Date is 20051231
    $$$$$$$$$$$$$ dname is
    Resource Bundle is java.util.PropertyResourceBundle@322394
    Product Name in file is PLANNING
    Availability Date is 20051231
    Exception in thread "AWT-EventQueue-0" java.lang.UnsatisfiedLinkError: no HspEssbaseEnv in java.library.path
    at java.lang.ClassLoader.loadLibrary(Unknown Source)
    at java.lang.Runtime.loadLibrary0(Unknown Source)
    at java.lang.System.loadLibrary(Unknown Source)
    at com.hyperion.planning.olap.HspEssbaseEnv.<clinit>(Unknown Source)
    at com.hyperion.planning.olap.HspEssbaseJniOlap.<clinit>(Unknown Source)
    at com.hyperion.planning.HspJSHomeImpl.TestEssConnection(Unknown Source)
    at com.hyperion.planning.HspDSEssbasePanelManager.TestEssConnection(HspDSEssbasePanelManager.java:156)
    at com.hyperion.planning.HspDSEssbasePanelManager.queryExit(HspDSEssbasePanelManager.java:132)
    at com.hyperion.cis.config.wizard.ProductCustomInputPanel.queryExit(ProductCustomInputPanel.java:114)
    at com.installshield.wizard.awt.AWTWizardUI.doNext(Unknown Source)
    at com.installshield.wizard.awt.AWTWizardUI.actionPerformed(Unknown Source)
    at com.installshield.wizard.swing.SwingWizardUI.actionPerformed(Unknown Source)
    at com.installshield.wizard.swing.SwingWizardUI$SwingNavigationController.notifyListeners(Unknown Source)
    at com.installshield.wizard.swing.SwingWizardUI$SwingNavigationController.actionPerformed(Unknown Source)
    at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
    at javax.swing.AbstractButton$Handler.actionPerformed(Unknown Source)
    at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source)
    at javax.swing.DefaultButtonModel.setPressed(Unknown Source)
    at javax.swing.plaf.basic.BasicButtonListener.mouseReleased(Unknown Source)
    at java.awt.Component.processMouseEvent(Unknown Source)
    at javax.swing.JComponent.processMouseEvent(Unknown Source)
    at java.awt.Component.processEvent(Unknown Source)
    at java.awt.Container.processEvent(Unknown Source)
    at java.awt.Component.dispatchEventImpl(Unknown Source)
    at java.awt.Container.dispatchEventImpl(Unknown Source)
    at java.awt.Component.dispatchEvent(Unknown Source)
    at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source)
    at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source)
    at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source)
    at java.awt.Container.dispatchEventImpl(Unknown Source)
    at java.awt.Window.dispatchEventImpl(Unknown Source)
    at java.awt.Component.dispatchEvent(Unknown Source)
    at java.awt.EventQueue.dispatchEvent(Unknown Source)
    at java.awt.EventDispatchThread.pumpOneEventForHierarchy(Unknown Source)
    at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
    at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
    at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
    at java.awt.EventDispatchThread.run(Unknown Source)
    But my Essbase server is up and running; I am able to connect to it through EAS.

    It looks like more of an issue with connecting to Essbase. Usually "java.lang.UnsatisfiedLinkError: no HspEssbaseEnv in java.library.path" means Planning has not been installed or deployed correctly. What OS is it running on?
    Cheers
    John
    http://john-goodwin.blogspot.com/
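    As a quick generic JVM check (not specific to Planning), you can print java.library.path and verify it contains the directory holding the HspEssbaseEnv native library:
    // An UnsatisfiedLinkError means the named native library is not in any
    // directory on this path.
    public class LibPathCheck {
        public static void main(String[] args) {
            System.out.println(System.getProperty("java.library.path"));
        }
    }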

  • Error while transporting Generic Datasource in R/3

    Hi All,
    I have created a generic datasource (R/3) on a view. As it was in $TMP, I assigned the datasource ZABCD and the view ZVIEW to a development package and also to a transport request. I released the transport request, and when I tried to import it into Q, it threw an error with the messages below:
    Extract structure ZOX**** is not active
    The extract structure ZOX**** of the DataSource ZABCD is invalid
    Errors occurred during post-handling RSA2_DSOURCE_AFTER_IMPORT for OSOA L
    The errors affect the following components:
    BC-BW (SAP Business Information Warehouse Extractors)
    The extract structure ZOX**** is the structure used by the datasource. I forgot to assign it to the earlier TR, and I think the error occurs because the structure is missing from the TR. Am I correct?
    How should I deal with it now? Do I need to create a new TR with this structure alone, as the earlier TR is already released?
    Thanks,
    RPK.

    As you say, it is possible the error came from not including the structure in the earlier TR.
    You can create an empty TR in SE09 or SE10. In these transactions there is a button with a box drawn on it (Include Objects); set the focus on your new empty TR and push that button.
    Then add the objects from the TR that was transported with the error, plus the missing structure.

  • DataSource for FAGLFLEXT and BSEG, or New Table in ECC6?

    We need to create an extractor to get all the information in FAGLFLEXT, because we need to keep the ledger information and the splitting of the information. However, we also need to add 13 fields contained in BSEG.
    Therefore we thought of reading the line items table FAGLFLEXA and then enhancing it through the BSEG table.
    However, since we are using ECC 6 and BI 7, the creation of DataSources for FAGLFLEXA through FAGLBW03 is not supported.
    Is it an option to incorporate all the fields into FAGLFLEXT?
    Can we create a new table group based on FAGLFLEXT and then add the coding block extensions to that table?
    How do the new G/L and the new table group work in parallel? What is the procedure for doing this?
    The documentation says we can create a new table group based on FAGLFLEXT; it's the "how does it work in conjunction" part that is unclear. For example, the new G/L handles document splitting (and one other thing Georg referenced last night): will the split documents go into our new table group?
    BSEG does not have the document splitting information that we need (its data is incomplete): it is missing profit centers on many items, and it is missing the proper splitting of transactions.
    Thanks for your comments.

    Here is more information about this post.
    Client situation: Our client is implementing ECC 6 and is using the "New G/L" features. Because of business requirements, the coding block has been extended (not insignificantly: 18 extra fields at the moment) to accommodate legal, regulatory and management reporting. The reporting solution includes standard ECC reporting (e.g. Report Writer / Report Painter reports) as well as feeds to BW (BI 7).
    The challenge: Our understanding is that adding all of the coding block extensions to the New G/L tables (i.e. FAGLFLEXA and FAGLFLEXT) may lead to performance degradation in the ECC system. However, we still need to accommodate the requirement to report by the additional dimensions that are not currently included in the New G/L, so our challenge has been to find a solution that minimizes performance issues while still giving us all the dimensions needed for the required reporting.
    What we would like to know: How have you handled this in similar situations?
    Have you added fields to the New G/L tables? How many? What performance issues did you encounter?
    Have you created additional table group(s) based on the New G/L and then modified that structure to carry the new fields? How does the additional table group work alongside the New G/L (e.g. does the additional table group receive document splitting information)?
    Have you created custom extractors for BW? On what basis (we understand that FAGLFLEXA cannot be created as a datasource to feed BW)?
