Best practices for adding a second BW system/instance to an existing one

Hi Experts,
I need to give my client different strategies, best practices, precautions, steps, and a how-to document - essentially a methodology that weighs the effort required (hardware + time) to build a second instance of our BW system.
We want to create one more BW server, and I need to create a blueprint for it, from scratch to finish.
Please help - I am not familiar with anything related to this.
Regards and thanks in advance
Gaurav

Hi Arun,
We have migrated to BI 7.0, but since SAP will no longer support the old BW 3.0, we are considering creating a new box where we would rewrite things more efficiently (using transformations/end routines), clean things up, and start fresh.
We also need to think about different geographies, as we have to accommodate different hubs that will eventually go live later.
The alternative is to make the changes in the existing system only.
Also, what is the difference between adding a different client/sandbox system and a different application server?
I think with a different client you cannot have separate development, but with a separate sandbox you can.
As for an application server, I am not sure.
Can you please explain a little more - what would the advantages be, and what effort (hardware/time) would be required?
Thanks in advance
Gaurav
Message was edited by:
        Gaurav

Similar Messages

  • Info on best practices to add jar references in server.xml

    Hi
    I want to add some jar references (pcl.jar & struts.jar) to server.xml.
    Can someone let me know if they can be added to <shared-library name="global.tag.libraries" version="1.0" library-compatible="true">?
    What is the best practice for adding such entries to server.xml?
    Thanks
    Badri

    If you want to use them in BPEL, they should be placed in oracle.bpel.common.
    cheers
    James
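    For reference, a shared-library entry in an OC4J 10.1.3-style server.xml generally takes the following shape (a sketch only - the jar paths are illustrative and must point at wherever the jars actually live):
    <shared-library name="global.tag.libraries" version="1.0" library-compatible="true">
        <!-- illustrative paths - adjust to the real jar locations -->
        <code-source path="../shared/lib/pcl.jar"/>
        <code-source path="../shared/lib/struts.jar"/>
    </shared-library>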

  • Best practices for data entry online system

    Hi all
    I am (with a team of 4 members) going to build an online data entry system which may have approximately 30 screens. I am going to use Spring BlazeDS remoting to connect to the middleware.
    Could anyone please suggest some good practices to follow on the Flex side for such a "data entry" application?
    The points below are a few common best-practice areas we need to address while coding, but I am not sure how to achieve them on the Flex side:
    User experience (I can probably get a little info regarding this from my client)
    Code maintainability
    Code extensibility
    Memory and CPU optimization
    Ability to work with team members (multiple checkouts)
    Best framework
    So I am looking for valuable suggestions from great minds.

    There are two options, neither of them very palatable:
    1) One is to create a domain, and add the VM and your local box to it.
    2) Stick to a workgroup, but use the same user name and password on both machines.
    In practice, a better option is to create an SQL login that is a member of sysadmin - or that has rights to impersonate an account that is a member of sysadmin. For that matter, you could use the built-in sa account - but rename it to something else.
    The other day I was looking at the error log from a server that apparently had been exposed on the net. The log was full of failed login attempts for sa, with occasional attempts for names like usera and so on. The server is in Sweden - the IP addresses for the login attempts were in China.
    Just so you know what you can expect.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • SAP RAR - Best Practice: ECC, CRM and BW systems

    Hi All
    I have the requirement to configure RAR for the ECC, CRM and BW systems. Each system has only one client. What is the best practice for using the rules against each system? I am assuming the rules will be the same irrespective of the system, but when I see the names of the initial files, they are system-specific. Can anybody elaborate on this? Thanks
    Regards
    Prasad

    Prasad,
    To build on Chinmaya's explanation, make sure you use a logical system for CRM, BI, and ECC for the basis portion of the rule set (and only the basis portion). This will keep you from duplicating your rules to meet your basis requirements. The other rules should be attributed to the individual systems (or additional logical systems if including multiple landscapes, e.g. Dev, QA, and Prod ECC merged into one ECC logical system).

  • Best Practice: Migrating transports to Prod (system down etc.)

    Hi all
    This is more of a process and governance question as opposed to a ChaRM question.
    We use ChaRM to migrate transports to production systems. For example, we have a Minor BAU Release (every 2 weeks), a Minor Initiative Release (every 4 weeks) and a Major Release (every 3 months).
    We realise that some of the major releases may require SAP to be taken offline. But what is SAP best practice for ANY release into production? For example, for our Minor BAU Release we never shut down any production systems, never stop batch jobs, never lock users, etc.
    What does SAP recommend when migrating transports to Prod?
    Thanks
    Shaun

    Have you checked out the "Two Value Releases Per Year" whitepaper for SAP recommendations? Section 6 is applicable.
    Lifetime Support by SAP » Two Value Releases per Year
    The "real-world" answer is going to depend on how risk-averse versus downtime-averse your company is. I think most companies would choose to keep the systems running except when SAP forces an outage or there is a real risk of data corruption (some data conversions and data loads, for example).
    Specific to your minor BAU releases, it may be wise to create a process whereby anything that requires a production shutdown, stopped batch jobs, locked users, etc. needs to be in a different release type. But if you don't have that kind of control, your process will need to allow for these things to happen with those releases.
    Also, with regard to stopping batch jobs in the real world, you always need to balance the desire to take full advantage of the available systems against the pain of managing the variations. If your batch schedule is full, how are you going to make sure the critical jobs complete on time when you do need to take the system down? If it isn't full, why do you need that time? Can you make sure only non-critical batch jobs run during those times? Do you have a good method of implementing an alternate batch schedule when need be?

  • Best Practice to Export Solman Customer Data to a New System

    Hi Solman experts,
    We are facing a new project to move a huge Solution Manager VAR scenario to a new server. The biggest problems are the system landscape customer data (LMDB) and the incident management customer data (ITSM), plus MOpz, EWA and SAP service delivery, which have requests from customers to SAP stored in Solution Manager.
    The information that we have to export to the new Solution Manager is:
    Customers: approx. 300+
    Systems: approx. 900+
    - Solution Manager configuration ("Managed System Setup") for around 900+ productive systems.
    - Customer data in LMDB and SMSY (product systems, technical systems, logical components, solutions, etc.).
    - VAR scenario data from ITSM (with communication with SAP).
    - VAR scenario data from MOpz, EWA, SAP engagement and service delivery, and Solution Documentation.
    We decided to do this via a fresh installation of Solution Manager SP11 with better system resources, rather than migrating our existing one.
    What do you think is the best way to migrate customer data from a SAP Solution Manager SP08 to a new server with SAP Solution Manager SP11?
    Thanks,
    Lluis

    Hi forum,
    Have you checked the SAP package AGS_SISE_MASS?
    You can try it by running this Web Dynpro app:
    http://<server>/sap/bc/webdynpro/sap/wd_sise_mass_conf
    Related SAP Support Notes:
    http://service.sap.com/sap/support/notes/2009401
    http://service.sap.com/sap/support/notes/1728717
    http://service.sap.com/sap/support/notes/1636012
    Regards,
    Lluis

  • Best practices for complex recipe-based system?

    Hi Folks,
    I'm at about the intermediate level (working on my CLD), and tasked with revamping a tightly-developed control system (which I'm intimately familiar with) into more of a configurable 'recipe'-based system. Basically, the current front-end control software does a lot of the work for the end user - it pre-defines flows, assumes certain electrical/plumbing configurations, etc. This is fine for the 'production floor'; however, the R&D guys would like something a bit more configurable.
    This system comprises several flow controllers, mostly controlled/monitored via analog I/O (Compact FieldPoint). There are some static analog input channels devoted to temp, humidity, etc. There is also the possibility of 1-2 external RS232 metering devices.
    Anyway, I'm trying to work out the foundation for the UI. In terms of architecture, I think a queued state machine is my best bet due to the number of parallel processes occurring at once (analog acquisition, multiple serial comms, TDMS, UI, etc.). Basically, I'd like the user to be able to add/remove/modify 'steps'. For instance, "Set Flow: controller IDx, 20cfm", "Time Delay, Static: 10:00", or "Time Delay, Pseudo-Static, based on X".
    I've worked out a configuration UI (utilizing the built-in NI configuration storage VIs) to associate the analog channels with external devices (i.e. Aout1="Controller1 SP", Ain1="Controller1 FB"). Later I'll populate a ring control, for instance for the 'add SetFlow step', to list all of the analog OUTs for selection.
    So I guess what I'm looking for is advice on passing all this info around without having to re-hash it all the time to present to the user. Keeping it in an enum/ring allows for easy user viewing and changing, and block diagram readability (vs. string constants, which are error-prone) - is this something that 'flatten to string' would be helpful for (something I have no experience using)?
    What tips can you provide for moderate-complexity HMI control systems developed strictly in LabVIEW? We currently don't have DSC, and I'm a bit closed-minded about using it for this (but perhaps you can convince me otherwise?).
    Thanks for your time,
    Jamie
    Message Edited by 8bitbanger on 04-21-2010 08:10 AM
    v2009 devel. w/RT

    Cool, thanks for the screenshot!
    This request for more customization was anticipated, so I began working things in last year with other minor revs. The first was this 'hardware configuration' utility. Right now I'm only using the MFC Config page for channel scaling/name info (the production version still relies on 'static' channel associations to control devices). The enum 'card/slot' selector does exactly as you mentioned - it controls a tab value, which loads other pages (with similar info).
    The second 'generator' page is used to populate a list of generators available for the user to select, and works quite well - users can add somewhat custom generators to the list without having to specify "custom" every time (and I don't have to rebuild to add such a simple thing).
    You can see the 'Flow Control' and 'Monitor' channels that have not yet been implemented. :-)
    Lastly, the mockup is where I want to end up. I *wish* that LabVIEW were able to incorporate enum/ring drop-downs within a table cell (without the hacks that I've seen suggested).
    I intended to set up a similar format for the 'steps' - an Action (or noun, as you say), Target (i.e. file path, device name, etc.), Value (setpoint, other pertinent data), and so on. Do you pass this info around as a cluster in your VI and then simply parse it out to the UI in the steps listing? My hurdle is how to elegantly relate, say, a CSV file back to the enums without a lot of hard-coded (constant) strings.
    Cheers,
    Jamie
    ::edit:: *Finally* found the button to insert images... ::edit::
    Message Edited by 8bitbanger on 04-21-2010 10:30 AM
    v2009 devel. w/RT
    Attachments:
    config_UI.JPG ‏52 KB
    generators.JPG ‏39 KB
    mock-up.JPG ‏33 KB
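    The enum-to-CSV round trip asked about above maps cleanly outside LabVIEW too (where the steps would be an enum plus cluster). A minimal sketch of the idea in Java - the RecipeStep/Action names are made up for illustration - where the enum itself is the only place the legal action names live:
    import java.util.ArrayList;
    import java.util.List;

    // Sketch: the enum is the single source of legal action names, so CSV
    // serialization needs no separate table of hard-coded string constants.
    public class RecipeStep {

        public enum Action { SET_FLOW, TIME_DELAY_STATIC, TIME_DELAY_PSEUDO_STATIC }

        public final Action action;
        public final String target;  // e.g. controller ID or channel name
        public final String value;   // e.g. "20cfm" or "10:00"

        public RecipeStep(Action action, String target, String value) {
            this.action = action;
            this.target = target;
            this.value = value;
        }

        // Serialize one step to a CSV line using the enum's own name()
        public String toCsv() {
            return action.name() + "," + target + "," + value;
        }

        // Action.valueOf() rejects unknown names, so adding a new Action
        // automatically extends both the UI list and the parser.
        public static RecipeStep fromCsv(String line) {
            String[] f = line.split(",", 3);
            return new RecipeStep(Action.valueOf(f[0]), f[1], f[2]);
        }

        public static void main(String[] args) {
            List<RecipeStep> recipe = new ArrayList<RecipeStep>();
            recipe.add(RecipeStep.fromCsv("SET_FLOW,Controller1,20cfm"));
            recipe.add(RecipeStep.fromCsv("TIME_DELAY_STATIC,,10:00"));
            for (RecipeStep s : recipe) System.out.println(s.toCsv());
        }
    }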

  • Best practice for upgrading an old system?

    My Arch Linux installation seems to have last been upgraded over three years ago. Today, a naive pacman -Syu resulted in a number of file conflict errors and wasn't carried out.
    I then checked the list of announcements since 2011 and identified a few that included the string "manual intervention required". I believe that it was the update of the "filesystem" package that didn't work, again due to conflicts, probably related to the move from /lib to /usr/lib around that time.
    My attempt to update glibc resulted in misconfigured libraries, which took a while to sort out. While I can run commands again, I doubt that my system is in a very healthy state now.
    What should I do now - and what should I have done - to update my Arch Linux installation, untouched for 3.5 years?
    Last edited by berndbausch (2014-08-31 04:14:50)

    SoleSoul wrote:If 'pacman -Syu' works now, what makes you ask this question? Is anything still broken?
    Well, I asked the question because nothing worked after following a few of those "manual intervention required" notes. More precisely, the result of the last pacman run was that literally no command worked. It turned out that the system couldn't find libraries anymore, in particular the loader ld-linux.so. It took me a while to figure this out and to patch the system up enough to have it limp along. Good learning, by the way.
    After that, and the suggestion in this forum that a reinstall was the best solution anyway, I did just that. Since my only applications were Samba and the acpi daemon, that was not too bad. Unfortunately it's not Arch Linux anymore, but CentOS, which I am simply more familiar with.

  • Best way to add second monitor with HDMI input

    I have a 24-inch Mid 2007 iMac: 2.4 GHz Intel Core 2 Duo processor, 4 GB 667 MHz DDR2 SDRAM, ATI Radeon HD 2600 Pro 256 MB graphics. I need to add a second monitor, and while I'm at it, this new monitor will be an HDTV monitor with an HDMI input. My iMac has the original USB connections, so I'm thinking an ethernet to HDMI adapter should do the trick.
    Is this right? If so, what supplier makes a good ethernet to HDMI adapter?

    Hi
    I've always thought powered DVI (ADC) was introduced on the Gigabit Ethernet G4s, like your dual 500. It should be easy to check, as the graphics card has an extra power 'stub' which fits into an extra slot on the logic board, between the main AGP slot and the back of the computer. If all the gold connectors on the Radeon 9000 are seated, I think you must have powered DVI/ADC. If some are 'floating' above the logic board, you haven't.

  • Best practice: Webdynpro in a large system landscape

    Dear Sirs,
    I have a few questions about using Web Dynpro (WD) in a large system landscape. After doing some research I understand there are a few alternatives, and I would like to get your opinions on the issue and links to any relevant documentation. I know most of my questions do not have a single answer, but I hope we can get a discussion going which will highlight the pros/cons.
    My landscape consists of a full set of ECC and portal servers (DEV, QA, P), where using WD to fetch BAPIs from the backend and present them in the portal is a likely scenario.
    Deploy the WD components on portal servers or on separate servers?
    Would you deploy the WD components on the portal WAS, or would you advise having a server (or a number of servers) dedicated to running WD?
    The way I see it, when you have a large number of developers, giving away the SDM password to the portal server (DEV) in order for them to test WD applications is not advisable (or perhaps more accurately, not wanted by Basis). So perhaps a separate WAS for development of WD is advisable, and then let Basis deploy to the portal QA and PROD servers. I do not think that each developer having their own local J2EE engine for testing is likely.
    How about performance? Will any solution be preferable over another? Will it be faster/slower to run WD on a separate WAS?
    Transporting the WD components
    How should one transport the components and keep them pointing to the right JCo connections (as you have different JCo connections for DEV, QA, and P)? I have seen examples in threads where you opt for dynamic setting of the JCo connections through parameters. Is this the approach to prefer?
    Any documentation on this issue would be highly appreciated. (Already read: System Landscape Directory, SAP System Landscape Directory on SAP Web AS Java 6.40.)

    Look into using NWDI as your source code control (DTR) and transport/migration path from dev through to production. This will also handle deployment to your dev system (check-in/activate).
    For unit testing and debugging you should be running a local version (NWDS). This way, once the code is ready to be shared with the team, you check it in (which makes it visible to other team members) and activate it (which deploys it to the development server).
    We are currently using a separate server for WD applications rather than running them on the portal server. However, this does not allow the WD app to run in the new WD iView, so it depends on what the WD app needs to do and have access to. Of course there is always the Federated Portal Network as an option, but that is a whole other topic.
    For JCo connections, WD uses a connection name, and this connection can be set up to point to different locations depending on which server it is on. So on the development server the JCo connection can point to the dev back-end, and in prod point to the prod back-end. The JCo connections are not migrated, but set up in each system.
    I hope this helps. There is a lot of documentation available for NWDI to get you started. See: http://help.sap.com/saphelp_erp2005/helpdata/en/01/9c4940d1ba6913e10000000a1550b0/frameset.htm
    -Cindy
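    To make the JCo point concrete: application code only ever refers to one logical connection name, and each system maps that name to its own physical back-end. A generic Java sketch of that principle (illustrative only - this is not the Web Dynpro or JCo API, and DestinationRegistry is an invented name; in a real landscape the mapping lives in each server's JCo destination maintenance, not in code):
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: "same logical name, different physical target per system".
    public class DestinationRegistry {

        private final Map<String, String> logicalToPhysical = new HashMap<String, String>();

        public void register(String logicalName, String physicalHost) {
            logicalToPhysical.put(logicalName, physicalHost);
        }

        public String resolve(String logicalName) {
            String host = logicalToPhysical.get(logicalName);
            if (host == null) {
                throw new IllegalStateException("No destination configured for " + logicalName);
            }
            return host;
        }

        public static void main(String[] args) {
            DestinationRegistry dev = new DestinationRegistry();
            dev.register("MODELDATA_DEST", "ecc-dev.example.com");   // DEV maps it here

            DestinationRegistry prod = new DestinationRegistry();
            prod.register("MODELDATA_DEST", "ecc-prod.example.com"); // PROD maps it there

            // The application asks for the same logical name on every system:
            System.out.println(dev.resolve("MODELDATA_DEST"));
            System.out.println(prod.resolve("MODELDATA_DEST"));
        }
    }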

  • Best way to add new dual band Extreme to existing b/g network

    I've been using a Snow Extreme and a b/g Express, but have recently been having dropped/slow connection issues. I think this may be at least partially caused by the many other networks and wireless devices in the neighborhood - I can see 30 or more networks at times. I've also gotten a new Mini and a MacBook, both with n wireless, so I decided to get a new dual-band Extreme.
    The faster connection speed of the new Extreme is very noticeable on my n-capable machines. It also looks like the connection issues I was previously having may have been resolved, but it's a bit early to tell for sure.
    The first problem I've had is with setting up the guest network. If I attempt to set it up wirelessly, I get as far as changing the settings and restarting the Extreme. Once I do, it does restart, but then AirPort Utility is not able to find it after the restart. I am able to see the main and guest networks in my available networks, but I am unable to join either. Once I turn guest off (via ethernet, since Utility isn't able to see it wirelessly) I am again able to see it in Utility and connect to the main network. If I try to turn on guest via ethernet, I get an error and it does not restart.
    Originally the Snow Extreme was the main and the Express was used for wireless printing. My plan was to use the new dual-band as the main, move the Snow to the printer, and use the Express for AirTunes. But now I realize that I'm only able to print and use AirTunes on the main network and not the guest network. Since the Snow and Express are b/g, are they going to slow down the main network? I am seeing these as clients in AirPort Utility, which I didn't expect. If so, is there a better way to set this up than what I am attempting?
    I've got the radio mode set to 802.11n only (5GHz) - 802.11b/g/n. Am I able to set it up so that the n-capable clients use the 5GHz band and the b/g clients use the 2.4GHz band so that they don't slow down the n connection, or would I even want to do this?
    Thanks!!!

    The JRE is 14,872 KB (j2re-1_4_2_03-windows-i586-p.exe).
    If you silently install the JRE, then licensing doesn't appear to be an issue (no dialogs appear). See http://java.sun.com/j2se/1.4.2/docs/guide/plugin/developer_guide/silent.html. Silent installation is perfect for us since our customers are very non-technical and would be very confused by the JRE installation dialogs.
    My main question remains: what is the best way to incorporate the installation of the JRE into an existing product? I solved the above-mentioned error 1500 problem by asynchronously starting the silent JRE installation after my product is installed, i.e., after the InstallFinalize step. This results in a bad human interface: the user is prompted with the final dialog with a "Finish" button, but the mouse cursor shows an hourglass intermittently (kind of like when you log on to Windows after a reboot) for 25-40 seconds after this dialog first appears, until the JRE is installed. I have got to figure out a way to synchronously install the JRE while the user waits.
    I chose to do a private install (search for "private" in http://java.sun.com/j2se/1.4/pdf/plugin-dev-guide-1.4.2.pdf), which works, but I haven't yet figured out the best way to uninstall this JRE - it appears that simply deleting the directory tree might be the correct way to uninstall.
    I have searched a lot of newsgroups for the "generally accepted" method of incorporating some Java programs into a product (i.e., how to install the JRE). I have not found anything. My conclusion is that I must be doing something that isn't done that often - either that, or I've taken a wrong turn.

  • [Swing ADF] add a new row based on an existing one

    Hi all,
    I'm starting with ADF, and here is my problem:
    In a Swing ADF application, I'd like to allow the user to create new records based on a record already present in the DB. The user selects a record in the data table and then pushes a button that fills in some JTextFields. He can then change whatever he wants. These JTextFields are in fact arguments of a method of my application module. Then, when an "add" button - created by dragging and dropping the method - is pushed, the method of the AM is called.
    The problem is: the arguments are null. This occurs when I fill in the fields programmatically (by getting the current row of the table's iterator). But if the user fills in the fields himself, then the arguments are not null.
    Is there someone to help me?
    Thanks a lot!
    Regards

    Hi,
    I created the method, dragged the arguments as text fields, and added the method call as a button.
    I then dragged and dropped the Departments view object as a table and created a button with the following action code:
    // Needs (among others) oracle.adf.model.binding.DCIteratorBinding,
    // oracle.jbo.domain.Number and oracle.jbo.uicli.jui.JUTextFieldBinding.
    private void jButton2_actionPerformed(ActionEvent e) {
        // Read the currently selected row from the table's iterator binding
        DCIteratorBinding dciter = (DCIteratorBinding) panelBinding.get("DepartmentsView1Iterator");
        Number deptId = (Number) dciter.getCurrentRow().getAttribute("DepartmentId");
        String dname = (String) dciter.getCurrentRow().getAttribute("DepartmentName");
        Number locId = (Number) dciter.getCurrentRow().getAttribute("LocationId");
        // Write into the argument bindings via setInputValue() rather than
        // into the JTextFields directly - this keeps the method arguments
        // from arriving as null when the method button is pressed.
        ((JUTextFieldBinding) panelBinding.get("deptId")).setInputValue(deptId);
        ((JUTextFieldBinding) panelBinding.get("dname")).setInputValue(dname);
        ((JUTextFieldBinding) panelBinding.get("locationId")).setInputValue(locId);
        // Synchronize the UI with the updated bindings
        panelBinding.refresh();
    }
    Pressing the button copies the values to the input arguments, and pressing the method button sends the values to the method.
    Frank

  • Kernel: PANIC! -- best practice for backup and recovery when modifying system?

    I installed NVIDIA drivers on my OL6.6 system at home and something went bad with one of the libraries. On reboot, the kernel would panic and I couldn't get back into the system to fix anything. I ended up re-installing the OS to recover my system.
    What would be some best practices for backing up the system when making a change, and then recovering if this happens again?
    Would LVM snapshots be a good option? Can I restore a snapshot from a rescue boot?
    Ex: File system snapshots with LVM | Ars Technica - scroll down to the section discussing LVM.
    Any pointers to documentation would be welcome as well. I'm just not sure what to do to revert the kernel or the system when installing something goes bad like this.
    Thanks for your attention.

    There is a common misconception here: a snapshot is not a backup. A snapshot and the original it was taken from initially share the same data blocks. An LVM snapshot is a general-purpose solution which can be used, for example, to quickly create a snapshot prior to a system upgrade; then, if you are satisfied with the result, you delete the snapshot.
    The advantage of a snapshot is that it can be used on a live filesystem or volume while changes are written to the snapshot volume. Hence it's called "copy on write" (COW), or copy on change if you want. This is necessary for system integrity - having a consistent status of all data at a certain point in time while allowing changes to happen, for example to perform a filesystem backup. A snapshot is no substitute for disaster recovery in case you lose your storage media. A snapshot only takes seconds and initially does not copy or back up any data, unless data changes. It is therefore important to delete the snapshot when it is no longer required, in order to prevent duplication of data and restore filesystem performance.
    LVM was never a great thing under Linux and can cause serious I/O performance bottlenecks. If snapshot or COW technology suits your purpose, I suggest you look into Btrfs, which is a modern filesystem built into the latest Oracle UEK kernel. Btrfs employs the idea of subvolumes and is much more efficient than LVM because it can operate on files or directories, while LVM works on the whole logical volume.
    Keep in mind, however, that you cannot use LVM or Btrfs for the boot partition, because the GRUB boot loader, which loads the Linux kernel, cannot deal with LVM or Btrfs before loading the Linux kernel (catch-22).
    I think the following is an interesting and fun-to-read introduction explaining the basic concepts:
    http://events.linuxfoundation.org/sites/events/files/slides/Btrfs_1.pdf

  • Best practice: Developing report in Rich Client or InfoView?

    Hi Experts,
    I have a question on the best practice of developing webi reports.
    From what I know, a Webi report can be created in Rich Client and then exported to one or more folders. From InfoView, the report can also be changed, but the change is only local to the folder.
    To simplify development and maintenance, I believe both creation and changes should be done solely in either Rich Client or InfoView. However, some features are only available in InfoView, not in Rich Client; one example is hyperlinking to another Webi report. As a second step, I can add the extra features in InfoView after the export. However, if I change the report in Rich Client and re-export it, the extra features added via InfoView (e.g. report hyperlinks) will be overwritten.
    As I'm new to BO, may I have some recommendations on the best practice for building reports? For instance:
    1) Only in Rich Client - no adding of features via InfoView
    2) First in Rich Client, then in InfoView - extra features need to be added again after each export
    3) Only in InfoView - all activities done in InfoView, no development in Rich Client
    4) Others?
    4) Others?
    Any advice is much appreciated.
    Linda
    Edited by: Linda on May 26, 2009 4:28 AM

    Hi Ramaks, George and other experts,
    Thanks a lot for your replies.
    For my client, the developers will build most of the reports for regular users to view. However, some power users may also create their own reports to meet ad-hoc reporting requirements.
    It's quite unlikely that my client will develop reports based on Excel or CSV data files, and we need to use features such as hyperlinks to documents (which are not available in Rich Client). Based on these considerations, I'm thinking of doing all development in InfoView (for both developers and power users). Do you foresee any issues if I go with this approach?
    Thanks in advance.
    Linda

  • Best practice "changing several related objects via BDT" (Business Data Toolset) / Mehrere verbundene Objekte per BDT ändern

    Hello,
    I want to start a discussion to find a best-practice method for changing several related master data objects via BDT. At the moment we are faced with miscellaneous requirements where we have a master data object that uses the BDT framework for maintenance (in our case an insured object). While changing or creating the insured object, several related objects (e.g. a business partner) should also be changed or created. So I am searching for a best-practice approach to implementing such a solution.
    One idea was to call a report via SUBMIT AND RETURN in event DSAVC or DSAVE. Unfortunately, this implementation method has only poor options for error handling. Second, it is also hard to keep the LUW together.
    Another idea is to call an additional BDT instance in the DCHCK event via FM BDT_INSTANCE_SELECT with the parameters iv_xpush_classic = 'X' and iv_xpop_classic = 'X'. So far we haven't got this solution working correctly, because there is always something missing (e.g. global memory is not transferred correctly between the two BDT instances).
    So hopefully you can report on your implementations, so that we can find a best-practice approach for such requirements.
    Best regards,
    Dominik
