Best practice to consolidate forecasts before loading into ASCP

Hi,
Can anyone suggest a best practice for consolidating forecasts that are maintained in spreadsheets? The forecasts come from Sales, Finance and Operations, and the consolidated forecast should then be loaded into ASCP.
Is there any way to automate the load?
Is Oracle S&OP the best product out there for this?
Do we also need Oracle Demand Management?
Regards

Forecast comes from Sales, Finance and Operations (spreadsheets)
-> Use integration interfaces to load the data into three different series: sales forecast, finance forecast and ops forecast.
Then the consolidated forecast should be loaded into ASCP.
-> Create a workflow/integration interface that loads the consolidation of the three series.
So this can be done in Demand Management (DM).
A standard workflow also exists in S&OP to publish the consensus forecast to ASCP, which accomplishes your objective.
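
If the three spreadsheet feeds are first landed in staging tables, the consolidation itself is just an aggregation across the three series. The sketch below is an illustration only, under that assumption: FCST_SALES_STG, FCST_FINANCE_STG, FCST_OPS_STG and FCST_CONSENSUS_STG are hypothetical table names (not seeded Oracle objects), and the blend rule (a straight SUM here) would follow whatever consensus logic Sales, Finance and Operations agree on.

    -- Sketch only: hypothetical staging tables, each carrying item, organization,
    -- bucket date and quantity for one of the three forecast streams.
    INSERT INTO fcst_consensus_stg (item, organization_code, bucket_date, quantity)
    SELECT item,
           organization_code,
           bucket_date,
           SUM(quantity)              -- or an average / weighted blend, per your consensus rule
    FROM (
          SELECT item, organization_code, bucket_date, quantity FROM fcst_sales_stg
          UNION ALL
          SELECT item, organization_code, bucket_date, quantity FROM fcst_finance_stg
          UNION ALL
          SELECT item, organization_code, bucket_date, quantity FROM fcst_ops_stg
         )
    GROUP BY item, organization_code, bucket_date;

The consolidated rows would then go through whatever forecast interface or collection you already use to feed ASCP; if you own DM/S&OP, the standard publish-consensus-forecast workflow mentioned above removes the need for this kind of custom SQL.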

Similar Messages

  • ODI: how to raise a cross-reference error before loading into Essbase?

    Hi John, if you read my post, I want to say that you impress me! Really, thank you for your blog.
    Today, my problem is:
    - I received a poor-quality data file from an ERP extract.
    - I have cross-reference tables (Source ==> Target).
    - >> How can I raise the error before loading into Essbase?
    My idea is the following (first of all, I'm not sure it is a good one, and I am also having trouble doing it in ODI!):
    - Step 1: make a JOIN between data.txt and the cross-reference tables ==> create a table DATA_STEP1 in the ODISTAGING schema (the columns of DATA_STEP1 are the columns of data.txt plus those of the cross-reference tables; there are more than 20 columns in my case).
    - Step 2: check that there is no NULL value in the target columns (NULL means that data.txt contains values that are not defined in my cross-reference tables) by using a filter (Filter = Target_Account IS NULL OR Target_Entity IS NULL OR ...); a SQL sketch of this check follows the post.
    The result of this interface is sent to a reject.txt file; if reject.txt is not empty, a mail is sent to the administrator.
    - Step 3: do the opposite: filter on NOT (Target_Account IS NULL OR Target_Entity IS NULL OR ...) ==> the result is sent to the DATA_STEP3 table.
    - Step 4: run the mapping proper: source DATA_STEP3 (the clean, verified data!) joined with the cross-reference tables, and send the data into Essbase. Normally there should be no rejected records!
    My main problem is: what is the right IKM to load data into the DATA_STEP1 or DATA_STEP3 tables, which are Oracle tables in my ODISTAGING schema? I tried IKM Oracle Incremental Update but I get an error, and actually I don't need an update (which is time consuming), I just need an INSERT!
    I'm just looking for an 'IKM SQL to Oracle' ...
    regards
    xavier
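
    As a minimal SQL sketch only (the real implementation would be the ODI interface filters described above), the Step 2 / Step 3 split amounts to the two queries below. DATA_STEP1, Target_Account and Target_Entity are the names from the post; the remaining cross-reference columns are elided.

        -- Step 2 sketch: rows whose source codes found no match in the cross-reference
        -- tables (their Target_* columns are NULL after the outer join built in Step 1).
        SELECT *
        FROM   odistaging.data_step1
        WHERE  target_account IS NULL
           OR  target_entity  IS NULL;      -- these rows feed reject.txt and the alert mail

        -- Step 3 sketch: the complement, i.e. the clean rows that continue on to Essbase.
        SELECT *
        FROM   odistaging.data_step1
        WHERE  target_account IS NOT NULL
          AND  target_entity  IS NOT NULL;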

    Thanks John, very speedy!
    I now understand better which IKM is useful.
    I found more information about error follow-up with ODI: http://blogs.oracle.com/dataintegration/2009/10/did_you_know_that_odi_generate.html
    and I decided to activate integrity control in ODI:
    I load:
    - data.txt into ODITEMP.T_DATA
    - transco_account.csv into ODITEMP.T_TRANSCO_ACCOUNT
    - transco_entity.csv into ODITEMP.T_TRANSCO_ENTITY
    - and so on ...
    - Moreover, I created integrity constraints between T_DATA and T_TRANSCO_ACCOUNT and T_TRANSCO_ENTITY ... so I expected ODI to raise the bad records for me in E$_DATA (the error table)!
    However, I have one issue when loading data.txt into T_DATA, because I have no ID or primary key ... I read in a training book that I could use a SEQUENCE ... I tried, but without success ... :-(
    Is there another simple way to create a primary key automatically (T_DATA is in an Oracle schema, of course)? Thanks in advance.
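
    One common Oracle-side pattern for giving T_DATA a surrogate key is a sequence plus a BEFORE INSERT trigger, sketched below. Only T_DATA and the ODITEMP schema come from the post; the ID column, sequence and trigger names are illustrative, and inside ODI you would more typically map the sequence's NEXTVAL onto the ID column of the interface instead of relying on a trigger.

        -- Sketch only: add a surrogate key so flow control has a primary key to work with.
        ALTER TABLE oditemp.t_data ADD (id NUMBER);

        CREATE SEQUENCE oditemp.seq_t_data START WITH 1 INCREMENT BY 1;

        CREATE OR REPLACE TRIGGER oditemp.trg_t_data_id
        BEFORE INSERT ON oditemp.t_data
        FOR EACH ROW
        WHEN (new.id IS NULL)
        BEGIN
          SELECT oditemp.seq_t_data.NEXTVAL INTO :new.id FROM dual;  -- portable across Oracle versions
        END;
        /

        ALTER TABLE oditemp.t_data ADD CONSTRAINT pk_t_data PRIMARY KEY (id);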

  • UNIX sed commands to clean up data before loading into BI

    Hi all,
    we are trying to load data into BI from text files. This data needs to be cleaned before it can be loaded into BI, and this has to be done using UNIX sed commands.
    We basically need to do a data-cleansing act, like removing unwanted characters (tab and newline values embedded in the characteristic values). How can this be done using UNIX sed commands?
    Your help is very much appreciated.
    Regards

  • De-personalizing data before loading into a DB

    Hi Guys,
    I need some help on de-personalizing customer data before loading it into the database using SSIS.
    Once all the transformations are done and we finally want to load the data into the respective tables, we need to de-personalize it.
    Also, how will this handle the datatype of each table column that needs to be de-personalized?
    Later on we have to decrypt the data again once it has been tested by the testers.
    Anky

    Hi Raj
    We have to encrypt the data before loading it into the table.
    We are not encrypting the client ID, so it can still be used to join with other tables for testing purposes, but the testers won't be able to see the other personal client data,
    like account number, address, DOB and so on.
    We have to decrypt the data back once testing is done.
    Anky
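
    Since this is an SSIS/SQL Server scenario and the values must be recoverable after testing while ClientID stays in clear for joins, one way to illustrate the idea in T-SQL is ENCRYPTBYPASSPHRASE / DECRYPTBYPASSPHRASE. This is a sketch only, not the poster's actual package: the table and column names (Customer_Stg, AccountNumber, Addr) are hypothetical, and real key management should be stricter than a hard-coded passphrase.

        -- Encrypted values come back as VARBINARY, which is the datatype point raised above:
        -- either widen the sensitive columns to VARBINARY or store the ciphertext in shadow columns.
        ALTER TABLE dbo.Customer_Stg
              ADD AccountNumber_Enc VARBINARY(256),
                  Addr_Enc          VARBINARY(512);

        UPDATE dbo.Customer_Stg
        SET    AccountNumber_Enc = ENCRYPTBYPASSPHRASE('test-cycle-secret', AccountNumber),
               Addr_Enc          = ENCRYPTBYPASSPHRASE('test-cycle-secret', Addr),
               AccountNumber     = NULL,   -- blank out the clear values before testers see the data
               Addr              = NULL;

        -- Once testing is signed off, the same passphrase reverses it:
        SELECT ClientID,
               CONVERT(VARCHAR(34),  DECRYPTBYPASSPHRASE('test-cycle-secret', AccountNumber_Enc)) AS AccountNumber,
               CONVERT(VARCHAR(200), DECRYPTBYPASSPHRASE('test-cycle-secret', Addr_Enc))          AS Addr
        FROM   dbo.Customer_Stg;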

  • Best practices for importing archived/new photos into LR?

    Hi All,
    New to LR 2.7. After countless tutorials and online chats, I am still confused as to how LR manages image files. It's like HS algebra all over again!
    Is LR designed and intended for managing all of your photos, or just those that need editing?
    Do you import all photos into one giant library?
    Is it best to save photos to an external drive first and then import them into LR?
    For photos that are saved on an external drive, is it still necessary to copy them to a folder on your computer before importing, or can you work directly from the drive? Is there a risk here?
    If you are importing directly from your memory card into LR, do you first need to save the images to the external drive to avoid losing the originals?
    I'm looking for a simple step-by-step routine to follow for archived photos and for new photos.
    Thank you for your help!!
    AJ

    AJ Allen wrote:
    Is LR designed and intended for managing all of your photos, or just those that need editing?
    All of them.
    Do you import all photos into one giant library?
    I do.
    Is it best to save photos to an external drive first and then import them into LR?
    For photos that are saved on an external drive, is it still necessary to copy them to a folder on your computer before importing, or can you work directly from the drive? Is there a risk here?
    If you are importing directly from your memory card into LR, do you first need to save the images to the external drive to avoid losing the originals?
    You can store the images wherever you want (internal, external or network drive), and you can import directly from a card to that location. LR won't delete the images from the card; you have to do that separately.

  • Best practice when deploying a single MDB into a cluster

    At a high level, we are converting all of our components to WebLogic processes that use stateless session beans and message-driven beans. All of the WebLogic processes will be clustered, and all of the topics and queues will be distributed (uniform distributed topics/queues).
    We have one component that is a single MDB reading from a single queue on one machine. It is a requirement that the JMS messages on that queue be processed in order, and processing the messages frequently requires that the same row in the DB be updated. Does anyone have any thoughts on the best design for this in our clustered environment?
    One possible solution we have come up with (not working):
    Possible solution 1: use a distributed topic and enforce a single client via a client-id on the connection factory, resulting in a single consumer.
    1. Deploy a uniform distributed topic to the cluster.
    2. Create a connection factory with a client-id.
    3. Deploy a single FooMDB to the cluster.
    Problem with solution 1: WL allows multiple consumers on the topic with the same client-id.
    1. Start two servers in the cluster.
    2. FooMDB running on Server_A connects to the topic.
    3. FooMDB running on Server_B fails with a unique-id exception (good).
    4. Send messages - messages are processed only once, by FooMDB on Server_A (good).
    5. Stop Server_A.
    6. FooMDB running on Server_B automatically connects to the topic.
    7. Send messages - messages are processed by FooMDB on Server_B (good).
    8. Start Server_A.
    9. FooMDB successfully connects to the topic, even though FooMDB on Server_B is already connected (bad). Is this a WL bug or our config bug?
    10. Send messages - messages are processed by both FooMDB on Server_A and FooMDB on Server_B (bad). Is this a WL bug or our config bug?
    Conclusion: does anyone have any thoughts on the best design for this in our clustered environment? And if the above solution is doable, what mistake might we have made?
    Thank you in advance for your help!
    kb

    Thanks for the helpful info, Tom.
    Kevin - it seems that for both the MDB and the JMS provider there are (manual or scripted) actions to be taken during any failure event, plus failure probes possibly required to launch these actions...?
    In the case of the JMS provider, the JMS destination needs to be migrated in the event of managed-server or host failure; if that host is the one that also runs the Admin server, then the Admin server also needs to be restarted on a new host, so that it can become available to receive the migration instructions and thus update the config of the managed server that is to be newly targeted to serve the JMS destination.
    In the case of the MDB, a deployment action of some sort would need to take place on another managed server in the event of a failure of the managed server or the host where the original MDB had initially been deployed.
    The JMS destination migration actions can be avoided entirely by using another JMS implementation that has a design philosophy of "failover" built into it (for example, Tibco EMS has fully automatic JMS failover features) and could be accessed gracefully by using WebLogic foreign JMS. The single MDB deployed on one of the WebLogic managed servers in the cluster would still need some kind of (possibly scripted) redeployment action, and on top of this there would need to be some kind of health-check process to establish whether this redeployment action actually needs to be launched. It is possible that the logic and actions required just to establish the true functional health of this MDB could themselves be as difficult as the original design requirement :-)
    All of this suggests that the BEA environment is not well suited to the given requirement; and if no other environment or JMS provider is available at your site, then the process itself may need to be reworked so that it can be handled in a highly available way that can be gracefully administered in a WebLogic cluster.
    We have not discussed the message payload design and the reasons that message order must be respected - by changing the payload design, and possibly adding additional data, this requirement "can", "in certain circumstances", be avoided.
    If you can't do that, I suggest you buy a two-node Sun Cluster with shared HA storage and use it to monitor a simple JMS client Java program that periodically checks for items on the queue. The Tibco EMS servers could also be configured on this platform, giving totally automatic failover protection for both process- and host-failure scenarios. With the spare money we can go to the pub.
    P.S. I don't work for Tibco or Sun, and I am a BIG WebLogic fan :-)

  • Do We Need to Validate Data Before Loading Into Planning?

    We are debating whether to load data from GL to Planning using ODI or FDM. If we need some form of validity check on the data, we will have to use FDM; otherwise I believe ODI is good enough.
    My question is: for financials planning, what determines whether we need validity checks or not? How do we decide that?

    FDM helps with validation and data-load audit options, but validation can be as simple as comparing totals by GL account between the source and Planning. You should be able to use ODI, FDM or load rules to load data into Hyperion, and complete the validation outside the load using any of the reporting options.
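
    As an illustration only of that totals comparison: the sketch below assumes the GL extract and a Planning/Essbase data export have been staged in two hypothetical tables, GL_EXTRACT_STG and PLANNING_EXPORT_STG, at the same level of detail. Any row returned is an out-of-balance account.

        -- Sketch: compare totals by GL account between the source extract and the Planning export.
        SELECT NVL(g.account, p.account)            AS account,
               NVL(g.amount, 0)                     AS gl_amount,
               NVL(p.amount, 0)                     AS planning_amount,
               NVL(g.amount, 0) - NVL(p.amount, 0)  AS variance
        FROM   (SELECT account, SUM(amount) AS amount FROM gl_extract_stg      GROUP BY account) g
        FULL OUTER JOIN
               (SELECT account, SUM(amount) AS amount FROM planning_export_stg GROUP BY account) p
        ON     g.account = p.account
        WHERE  NVL(g.amount, 0) <> NVL(p.amount, 0);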

  • Best practices for importing .png screen shots into FrameMaker 9?

    I'm new to importing .png screen shots into FrameMaker, and also new to FrameMaker 9. To compound things, I have engineers giving me screen shots taken on Linux machines, using a client like NX to get to Linux. My screen shots are coming in fuzzy, and instead of trying to troubleshoot what's going wrong here, I was wondering if someone could quickly tell me what normally works with FrameMaker 9 and .png on a Windows system, so I can troubleshoot from there.
    That is, let's say I'm capturing a screen shot on a Windows system and I use some tool (Paint Shop Pro X, SnagIt, Alt-PrintScreen, or whatever).  I save the screen shot as .png and then import by reference into FrameMaker 9.
    What dpi do I use in the capturing program?
    What dpi do I use when I import by reference into FM?
    What if that makes the screen shot too large or too small: is it better to use the FrameMaker Graphics > Scale option, to resize it in my capture program, or to retake the screen shot?

    Bringing screenshots into Frame documents has four major considerations:
    how to perform the original screen capture
    how to post-process it in a graphics editor
    what graphical object model (and file format) to store it in
    making sure it isn't damaged by your workflow
    1. Screen Cap, typically of dialog boxes ...
    ... used to be simple; isn't anymore.
    Dialogs used to have identical pixel dimensions on all user screens. You hit Alt[PrntScrn] in Windows and you got a one pixel per pixel 24-bit color image on the clipboard.
    More recent operating systems now have much more scalability in the GUI. So either capture at what the default user sees, or at the optimal presentation enlargement, which might be precisely 2x the default display.
    2. Post
    Before you even start this step, you need to know exactly how you will be presenting the final document out of Frame:
    B&W? Color?
    What dimensional size on the page?
    If hardcopy, what are the optimal parameters of graphical images for your print process? If web or PDF, various other considerations arise. I'll presume print.
    In our workflow, the print engine is 600 dpi bitmap. We normally present images at one column width, about 3.5in. Our PDF workflow passes 600 bitmap untouched, but resamples gray and color to 200 dpi. I tend to use 600 dithered bitmap, but always test if there's any doubt.
    Chances are the screencap bitmap, at your native printing res, is too small. There is no "dpi" associated with clipboard images. Once pasted into an image editor, the editor should provide the ability to declare the dpi (Photoshop does, both at paste, and via "resize without rescale"). If your image editor doesn't, fire it. If you can't see and control dpi, you have no real control over this workflow.
    Plan to save the final image in a format that at least supports linear dimensions (like EPS) if not explicit dpi (TIFF). Save out the image at the exact size you intend to use at import in Frame. Do you know how Frame scales objects? I don't. So do your scaling where you have control over how.
    Play with the defined "dpi" until the linear dimensions match your planned import frame (for example, a capture 2100 pixels wide destined for a 3.5 in column works out to 2100 / 3.5 = 600 dpi). If that dpi passes through your workflow unmolested, you may not need to perform any resizing.
    If you need to convert from 24-bit color to bitmap (at your printing resolution), I'd suggest using error diffused dithering converting to your target res, as this tends to preserve the hard edges in dialog boxes and rasterized text.
    If you need to re-scale, I'd suggest scaling up by an integer multiple, using "nearest neighbor", to the nearest size just higher than your target dpi. Then rescale bi-linear down to your target size.
    3. File Format
    The main consideration here is compression, followed by color depth and size encoding.
    Screenshots tend to have expanses of flat colors, and hard edges. These objects compress reasonably well using repeat-count compressions (like ZIP or RLE). They tend to get damaged by curve-matching compression, like JPEG, because they typically contain no curves through color space. Hard edges get fuzzy or pick up ringing artifacts.
    So don't use JPEG (and what compression will your workflow use?). I tend to use TIFF(ZIP) for screenshots.
    Avoid indexed (4-bit, 8-bit or 15/16-bit) color.
    Don't use GIF. It has no size encoding. The usual presumption is that GIF is 72 dpi, which makes importing it a nuisance, and at least a minor math exercise. Plus, it's indexed color, and may scale poorly downstream.
    Experiment. See what looks optimal in the final product.
    4. Workflow
    As you can see throughout the above, all your careful planning can be for nought if your save-as-PDF, Distiller or print-shop process resamples everything down to 90 dpi. You have to know what's happening, and how to optimize for it, or you probably won't like the result.
    We once had graphics showing up at 10 (yes ten) dpi in print.
    Turned out the print engine could handle JPEG 5 but not JPEG 2000.
    We had to hastily back-port images until a software upgrade fixed it.

  • Best practices for massive offline media import into LR 3.3?

    I have hundreds of DVDs with thousands of images on them. I've imported a few disks just to test the program/workflow.
    Once a disk has been imported, there is no longer any reference to it. The thumbnails all have a filename and path starting with G:\ (my BD drive) but the disk's ID data does not exist anywhere that I've been able to find.
    I have a program called, "Advanced Disk Catalog" which will read all the folders and subfolders and files from a disk, as well as the disk ID just by inserting the disk and clicking a single button. Unfortunately, however, it does not create thumbnails.
    Before I begin importing large numbers of disks and manually entering the disk ID for every one (in the keywords and sublocation fields of the metadata), I'm wondering how others have handled this. It seems a waste of my time to have to modify the metadata for every single disk when, obviously, it can be read by software. Will LR do this?
    Or...???
    TIA,

    After some messing about, this is the solution that works for me:
    From Library, select "Import..."
    Insert DVD containing images.
    When it appears in the Source column, select it and check "Include Subfolders."
    Without waiting for it to create thumbnails, select "Import."
    Let LR create all the thumbnails. (Get up, stretch your legs, get coffee, make a phone call, write checks, etc...)
    Use Ctrl+A to select all images.
    In the Metadata panel, type the disk ID in the Sublocation field. Press Enter.
    "Set Metadata on Multiple Photos" will appear. Click on "Apply to Selected."
    You will now have a way to track back to the disk where any image resides.
    I find it easiest to add keywords immediately after changing the sublocation.
    Hope you find this useful! 

  • Best practice for Java stand alone upgrade into maint org area?

    I have a Java standalone running EP 6.0 on SAP NetWeaver 04 at SP 20.
    I want to upgrade to SAP NetWeaver 2004s. I think the equivalent of SPS 20 of NW 04 in NW 2004s is SPS 11 (i.e. all the SCAs are named *11...).
    The NW 2004s SPS 11 download requires use of maintenance_organizer.
    Does this mean I should change the SMSY definition of my Java solution to reflect SAP NetWeaver 2004s now, rather than after I actually upgrade, even though the upgrade may be several weeks away?
    Is changing the SMSY definition the only way to get the SPS 11 media?
    I am currently in the process of collecting all of the media required for a successful upgrade.
    Thx
    Ken Chamberlain
    University of Toronto

    kevjava wrote: Some things that I think would be useful:
    Suggestions reordered to suit my reply.
    kevjava wrote: 2. Line numbering, and/or a line counter, so you can see how much scrolling you're going to be imposing on the forum readers.
    Good idea, and since the line count is only a handful of lines of code to implement, I took that option. See the [line count|http://pscode.org/stbc/help.html#linecount] section of the (new) [STBC Help|http://pscode.org/stbc/help.html] page for more details. (Insert plaintive whining about the arbitrary limits set - here.)
    I considered adding line-length checking, but the [Text Width Checker|http://pscode.org/twc/] ('sold separately') already has that covered, and I would prefer to keep this tool more specific to compilation, which leads me to..
    kevjava wrote: 1. A button to run the code, to see that it demonstrates the problem that you wish for the forum to solve...
    Interesting idea, but I think that is better suited to a more full-blown (but still relatively simple) GUI compiler. I am not fully decided that running a class is unsuited to STBC, but I am more likely to implement a clickable list of compilation errors than a 'Run' button.
    On the other hand, I am thinking the clickable error list is also better suited to an altogether more capable compiler, so don't hold your breath waiting to see either in the STBC.
    You might note I have not bothered to update the screenshots to show the line count label. That is because I am still considering error lists and running code, and am open to further suggestions (not because I am just slack!). If the screenshots update to include the line count but nothing else, take that as a sign. ;-)
    Thanks for your ideas. The line count alone is worth a few Dukes.

  • Best practice for load balancing on SA540

    Is there a 'best practice' guide for configuring outbound load balancing on the SA540?
    I've got two ADSL lines and would like the device to manage outgoing traffic automatically. Any ideas?
    Regards

    Hi,
    The SA500 today implements a flow-based round-robin load-balancing scheme.
    In the case of two WAN links (over ADSL), the traffic should by default be roughly equally distributed.
    So in general, users should not need to configure anything further for load balancing.
    The SA500 also supports protocol binding (similar to policy-based routing) over the WAN links. This mechanism offers more control over how traffic flows.
    For example, if one ADSL link offers higher throughput than the other, you can consider binding the bandwidth-hungry apps to the WAN link connected to the faster ADSL line and the less bandwidth-hungry apps to the other one. The remaining traffic can continue to round-robin. This way you won't saturate the low-bandwidth link, and users get a better application experience.
    Regards,
    Richard

  • FDM file format best practice

    All, we are beginning to implement an Oracle GL, and I have been asked to provide input on the file format provided from the ledger to be processed through FDM (I know, processing directly into HFM is out.. at least for now..).
    Is there a "best practice" for file formats to load through FDM into HFM? I'm really looking for efficiency (fastest to load, easiest to maintain, etc.).
    Yes, we will have to use maps in FDM, so that is part of the consideration.
    Questions: fixed-width or delimited? Concatenate fields or not? Security? Minimizing the use of scripts? Is it better to have the GL consolidate, etc.?
    Thoughts appreciated

    If possible, a comma- or semicolon-delimited file would be easy to maintain and easy to load.
    The less scripting used on the file, the better the import performance.

  • Best Practice Attaching USB drive to VM

    Since USB passthrough is no longer supported in OVM, what is the best practice and/or recommendation for loading large amounts of data into a VM from an external USB drive?

    thxceej, welcome to the discussion area!
    Suggest that you connect the drive directly to your MBP, then open
    Applications > Utilities > Disk Utility.
    Click to select your drive on the left, then click Repair Disk at the lower right of the window and let the process run. It may take 30-60 minutes depending on how much data you have on the disk.
    When the disk has been repaired, power down the complete network (all devices). Connect the drive back to the USB port on the Time Capsule. Then start the modem first, then your router, then the other devices one at a time until everything is powered back up.
    See if that helps.

  • Best practice for the replacement of Reports cluster

    Hi,
    I've read the note reports_guide_to_changed_functionality on OTN.
    On page 5 it is stated that Reports clustering is deprecated.
    Snippet:
    Oracle Application Server High Availability provides the industry’s most
    reliable, resilient, and fault-tolerant application server platform. Oracle
    Reports’ integration with OracleAS High Availability makes sure that your
    enterprise-reporting environment is extremely reliable and fault-tolerant.
    Since using OracleAS High Availability provides a centralized clustering
    mechanism and several cutting-edge features, Oracle Reports clustering is now
    deprecated.
    Please, can anyone tell me what the best practice is for replacing Reports clustering?
    It's really annoying that the clustering technology changes in every version of Reports!!!
    martin

    Hello,
    in reality, Reports Server "clusters" were more of a load-balancing solution than clustering (no shared queue or cache). Since it is desirable to have one load-balancing/HA approach for the application server, Reports Server clustering is deprecated in 10gR2.
    We understand that this frequent change can cause some level of frustration, but it is our strong belief that unifying the HA "attack plan" for all of the app server components will ultimately benefit customers by simplifying their topologies.
    The current best practice is to deploy LBRs (load-balancing routers) with sticky-routing capabilities to distribute requests across mid-tier nodes in an app-server cluster.
    Several customers in high-end environments have already used this kind of configuration to ensure optimal HA for their systems.
    thanks,
    philipp

  • SAP Best Practices

    Hi All
    Please let me know if any of you have loaded Best Practices. I want to load the Best Practices package for a particular country.
    I need the steps and the things that have to be taken care of.
    Regards
    Yogesh

    Link: http://help.sap.com/
    Path: SAP Best Practices --> Baseline Packages --> Based on SAP ECC 5.00 --> Select Country: e.g. Localized for India --> Technical Information --> Building Blocks --> Select Country, e.g. India --> a list of basic configurations and scenarios will be displayed.
    Select the required basic configuration/scenario. Each consists of an overview, configuration guide, business process, master data and so on.
    Regards,
    Rajesh Banka
