Strategy for big data

Dear experts,
Currently I'm facing a big data problem. We have about 1 TB of transaction records per month.
Now I'm trying to create data marts for that and install OBIEE. What are the strategy and steps?
Please advise...
BR,
Eba

Denis,
In this case you can do it two ways.
1. Proxies - You will have to develop a custom report which collects all the data that needs to be sent, and then call the proxy with the collected data as input.
2. IDocs - If you are dealing with standard IDocs, this is easier. You can activate the configuration to send the IDocs for contracts for all the operations that you have mentioned. Do the required outbound configuration in WE20 to set the target system to XI.
I am not sure why you are even thinking of scheduling a BPM in XI that will invoke the RFC. SAP itself has scheduling capabilities; I would suggest you use those instead.
Regards,
Ravi

Similar Messages

  • GoldenGate for Big Data 12c for Win x64?

    I was looking for the GoldenGate for Big Data download for Win x64, and all I found on edelivery was the Linux, Solaris, HP-UX, and AIX platforms, but no Windows at all (see the screenshot below). I wonder if it's been released yet, or is it just an unfortunate omission?
    Thanks
    Andy

    Thanks for your reply, Karan!
    I tried following your advice, but bumped into yet another similar problem. I've installed OGG 12c and now I can't seem to find the matching version of the GG Application Adapters for JMS and Flat File for the Win x64 platform. The latest version of the Application Adapters available on edelivery is 11.1.1.0.0, which means I would need to downgrade OGG to the same version. No big deal, but I wanted to make sure I'm not missing anything.
    I wonder if anybody has any idea whether the Application Adapters 12c for JMS and Flat File are available for the Win x64 platform, and if so, where I can download them from?
    Thanks
    Andy

  • Working with R packages for Big Data

    Hi,
    I wonder which R packages from the big data and parallel processing family are relevant for work in ML Studio.
    It depends on whether ML Studio uses MapReduce when running an R script; if it does, the RHadoop package does not seem useful.
    Would using the snowfall package for parallel processing help with high-volume datasets? Would it exploit several CPUs?
    Thanks in advance

    Currently, the R scripts are executed on a single VM. You can manually set up a map-reduce pattern by splitting the data and having multiple Execute R Script modules run in parallel in your experiment graph.
    -Roope

  • Strategy for managing data over multiple drives

    I have been looking at extending my hard drives and considering the very same options as The Hatter suggested in recent posts, i.e. Raptor vs. Caviar SE16 2 x 750 vs. Caviar RAID 2 x 750.
    I received a deal on the initial HDD set with my Mac Pro, so I currently have two 250 GB HDDs, and I have just started to move my iTunes and iPhoto files plus other media files (incl. Photoshop data, movies, documents, etc.) onto the separate drive, to see what performance benefits I will get. This is hopefully a prelude to a more ruthless split of files along the lines of a formal strategy, hence my question on what strategy I should have.
    My dilemma is trying to find a clear explanation of where to start looking for the practical way to actually set up OS X and the apps on one boot disc/partition and the media/documents on another drive(s), and what to do with the other stuff that doesn't actually fit into either category: user/home/library/application support folders, presets, and other application support files.
    I have read posts with people saying not to move user and application support data/library files from the boot drive, in which case a lot of file data is still likely to reside on the boot drive even after removing documents and all media files.
    I am paranoid about not having a clear idea of the right strategy before starting the whole process.
    It's more a strategy question than a hardware one, but I have not been able to really get this answered from the posts that I have searched.
    cheers
    graham

    I personally wouldn't bother with Raptors. They are disproportionately expensive and do not perform any better than much larger drives in the 750 GB/1 TB sizes. With these large drives I cannot see any compelling reason to go with a Raptor. Compared to 500 GB and smaller drives, sure… but not with larger drives.
    So presuming you're going with 2 x Western Digital RE2 750 GB drives, you then need to decide if you're using RAID or not. If you are, then you'll have a single 1.5 TB volume to use, which requires no further effort.

  • Best strategy for deleting data

    For example, I need to delete a lot of data from big databases. There are a lot of indexes, and I know about the problem with the B-trees which must be reorganized by the DBMS on every delete. Is there a faster way to delete a big pile of data from the tables?

    Hi,
    go through the AskTom post; it's clearly explained there.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5033906925164
    Regards,
    Vijayaraghavan K
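    A common pattern here (a hedged sketch, assuming Oracle and hypothetical table/index names) is to sidestep the per-row B-tree maintenance entirely: keep the rows you want in a new table, swap it in, and rebuild the indexes once at the end, instead of deleting the rows you don't want.
    -- keep the survivors instead of deleting the victims
    CREATE TABLE sales_keep AS
      SELECT * FROM sales
      WHERE sale_date >= DATE '2010-01-01';
    DROP TABLE sales;
    ALTER TABLE sales_keep RENAME TO sales;
    -- recreate indexes, constraints and grants once, after the swap
    CREATE INDEX sales_date_ix ON sales (sale_date);
    If only a small fraction of the rows is going away, a plain DELETE is usually still the right tool.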

  • Big data and database administration

    Hi,
    I am working as an Oracle DBA. I would like to know what the DBA role is for big data and NoSQL.
    Is it really useful to learn big data?
    Thanks,

    > Are there any relationships between these two fields?
    You are comparing chalk and cheese.
    > How can I learn more about data warehousing?
    Start with the Oracle documentation:
    Oracle® Database Data Warehousing Guide
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/toc.htm

  • Big Data example

    Hi all, 
    I have been hearing the term big data for quite some time now...
    Whenever I look on the web I only find infrastructure explanations...
    What does it mean in terms of T-SQL coding? Structured storage on SQL Server (are tables used)? Is there a "Hello World" example for big data?
    Sorry if my question is a bit weird, but that's how I've always started learning any new programming language or API.
    Thanks in advance, 
    Dror

    Hi,
    what is big data?
    from wiki
    Big data is an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process them using on-hand data management tools or traditional data processing applications.
    http://en.wikipedia.org/wiki/Big_data
    What is Hadoop?
    Hadoop is designed to efficiently process large volumes of information by connecting many commodity computers together to work in parallel
    https://developer.yahoo.com/hadoop/tutorial/
    Hadoop distribution for Microsoft
    http://hortonworks.com/partner/microsoft/
    Microsoft PDW
    http://gnanadurai.blogspot.in/
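    If a "Hello World" helps: the canonical first program on Hadoop-style platforms is a word count. Here is a minimal sketch in Hive-style SQL, which platforms such as HDInsight execute as parallel MapReduce jobs behind the scenes (the docs table and its line column are hypothetical):
    -- docs holds one row per line of raw text: docs(line STRING)
    SELECT word, COUNT(*) AS occurrences
    FROM (
      SELECT explode(split(line, ' ')) AS word
      FROM docs
    ) w
    GROUP BY word
    ORDER BY occurrences DESC;
    The point is less the syntax than the execution model: the same query runs unchanged whether docs holds one file or terabytes spread across a cluster.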

  • Big Data Lite ver 3.0 cannot launch movieplex demo

    I downloaded the Big Data Lite 3.0 virtual machine, successfully imported the .ova, and logged in as oracle to the Linux KDE desktop.
    In Firefox I clicked "Start Here", then clicked "http://localhost:7001/movieplex/index.jsp". Immediately I got the Firefox error:
    Firefox can't establish a connection to the server at localhost:7001.
    I googled and not much information was found. I am not sure if there's a Java error in setDomainEnv.sh, as some posts indicated; I followed them and modified it, but still couldn't get the website up. I am not into the lab exercises yet; just attempting to run the demo, I already hit an error.
    Can anyone help please?

    Did anyone get this working on the Big Data Lite VM?
    I was getting the Jackson NoSuchMethod... error, but I was able (I think) to get around it by downloading newer versions of the Jackson jars.
    After I click Sign In, all I get is the text: oracle.kv.impl.api.KVStoreImpl@<a_value_which_changes_every_click>
    Should I just give up and load 2.5? Is it more stable?
    How deep do the errors go? This is the third thing I have had to fix, and I am wasting valuable time spinning my wheels.
    Thanks,
    matt

  • What is the difference between Data Warehousing and Big Data?

    What is the difference between big data and data warehousing? Are they the same? Similar? If not, when should each of them be used?
    Any link to a paper that describes the difference?

    Big data is a term applied to data sets whose size is beyond the ability of commonly used tools to capture, manage, and process within a tolerable elapsed time, whereas a data warehouse is a collection of data marts representing historical data from different operations in the company.
    In other words, big data is a collection of large data in a particular form, while a data warehouse collects data from the different departments of an organization and requires efficient management techniques. Conceptually they are the same on only one point: both collect a large amount of information.
    For big data you can start researching HDInsight, and for data warehousing check MSBI.
    Sandip Pani http://SQLCommitted.com

  • I am using the big date calendar template and when I submit it to apple for printing I lose the name of two months. These names are not text boxes. I see the names when I send it in but something happens during the transmission to apple. It was suggested

    I am using the Big Date calendar template in iPhoto. I am on Lion 10.7.2, on a MacBook Air. The names of the months are on each calendar page, but something happens when I send the data to Apple. The names are part of the template; they are not text boxes. I lose two names on the calendar after it is sent to Apple. Apple suggested I make a PDF file of my calendar before sending it in and check to make sure every name shows. I did this with a calendar I just sent in. The calendar was correct: all names of the months were showing. After sending the data, two month names disappeared; when the calendar arrived by mail, it was incorrect. Apple looked at my calendar via a PDF file and it was incorrect. This is the second time this has happened. I called Apple and they had me delete several folders in the Library folder and some preferences, and do a complete reinstall of iPhoto. I have not yet remade the defective calendar. I am wondering if anyone else has had this problem?
    kathy

    Control-click on the background of the view-all-pages window and select "Preview Calendar" from the contextual menu.
    You can also save the PDF as a file to compare to the printed calendar. If the two names are visible in the PDF file, then the printed copy should show them. Contact Apple for a refund: Apple Print Products - Apple Store (U.S.)

  • Data conversion strategy for new SOB

    Dear Viewers
    on 11.5.10
    We are creating a new SOB with a change in currency from Feb-11, as this is the requirement.
    For the same, we need to do data conversion.
    I am confused about Purchase Orders and Sales Orders.
    Purchase Orders:
    Open purchase orders will be converted, meaning the unfulfilled POs, i.e. the ones not received and fully open.
    For the POs which have been received but not delivered, we have requested the users to clear the in-transit receipts.
    For the POs which are partially received, what is to be done?
    A PO which is fully received and delivered will not be converted to the new SOB, as it is not an open PO; but if the invoice comes after Feb-11, how will the matching be done?
    What if a return has to be made after Feb-11 under the new SOB?
    Sales Orders:
    Open sales orders will be converted, that is, the ones that have been entered and not yet booked.
    Users have been requested to clear off the sales order lines which are already pick-confirmed but not yet shipped, so they will be shipped and interfaced to AR.
    For the sales orders that have been booked, the lines that have not yet been processed further will also be converted.
    Now, what if a receipt comes after Feb-11? How should this be handled, as the sales order would not have been converted?
    Please give your advice on the data migration strategy for POs and SOs,
    and please add any point that I may have missed.
    Appreciate your help
    Thanks
    Emm

    Hi David,
    for master data conversion you can use LSMW and the RE-FX BAPIs (please refer to SAP note [782947|https://service.sap.com/sap/support/notes/782947]).
    Regards, Franz

  • Best strategy to upload a big data file and then insert its content into the DB

    Hi,
    Here's our requirement. We have a web business application developed on JSF 1.2 and Java EE 6, with WebLogic as the application server and Oracle for the back-end data tier. We need to upload big data files (80 to 100 MB) from a web page and persist the content in database tables.
    What's the best way to implement this use case in terms of performance? Once the file is uploaded to the server, a command button is available on the web page to trigger a JSF controller action that saves the data in the database.
    Currently we plan to keep the content of the HTTP request in memory and call an insert for each line of the file, but I think that is bad and not scalable.
    Is it better to write the file to the server's disk and then use multiple threads to send the lines to the database? How would one use multithreading in a JSF managed bean?
    Thanks

    In addition, LoadFromFile is overloaded to handle both BLOB and CLOB:
    PROCEDURE LOADFROMFILE
    Argument Name                  Type                    In/Out Default?
    DEST_LOB                       BLOB                    IN/OUT
    SRC_LOB                        BINARY FILE LOB         IN
    AMOUNT                         NUMBER(38)              IN
    DEST_OFFSET                    NUMBER(38)              IN     DEFAULT
    SRC_OFFSET                     NUMBER(38)              IN     DEFAULT
    PROCEDURE LOADFROMFILE
    Argument Name                  Type                    In/Out Default?
    DEST_LOB                       CLOB                    IN/OUT
    SRC_LOB                        BINARY FILE LOB         IN
    AMOUNT                         NUMBER(38)              IN
    DEST_OFFSET                    NUMBER(38)              IN     DEFAULT
    SRC_OFFSET                     NUMBER(38)              IN     DEFAULT
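    A hedged usage sketch (the directory object, file, and table names are hypothetical; the file must already be on the database server and exposed through an Oracle directory object):
    DECLARE
      l_dest BLOB;
      l_src  BFILE := BFILENAME('UPLOAD_DIR', 'bigfile.dat');
    BEGIN
      -- create the row that will hold the file content
      INSERT INTO uploads (id, payload)
      VALUES (1, EMPTY_BLOB())
      RETURNING payload INTO l_dest;
      -- let the database stream the bytes server-side
      DBMS_LOB.OPEN(l_src, DBMS_LOB.LOB_READONLY);
      DBMS_LOB.LOADFROMFILE(
        dest_lob => l_dest,
        src_lob  => l_src,
        amount   => DBMS_LOB.GETLENGTH(l_src));
      DBMS_LOB.CLOSE(l_src);
      COMMIT;
    END;
    /
    This keeps the 80-100 MB payload out of the JVM heap entirely: the JSF action only has to move the uploaded file into the server directory and then issue one call like the above.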

  • Best practices for administering Oracle Big Data Appliance

    -        Best practices for administration of the Oracle Big Data infrastructure
    -        How do we lock down maximum space usage per project?
    E.g.: project team A can have a maximum limit of 10 TB of allocated space
    -        Restricting roles and access (read, write); a placeholder for common shared artifacts
    -        A template/procedure for code migration across dev, QA, and prod environments, etc.

    Your data is bigger than what I run, but what I have done in the past is to restrict each team's accounts to a separate datafile and limit its size to the maximum I want them to use, then have them create their objects in that restricted location.
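    In Oracle terms that approach looks roughly like this (a hedged sketch with hypothetical names; on the Hadoop side of the appliance you would reach for HDFS directory quotas instead):
    -- dedicated bigfile tablespace whose datafile cannot grow past the cap
    CREATE BIGFILE TABLESPACE project_a_ts
      DATAFILE '/u01/oradata/proj_a_01.dbf' SIZE 100G
      AUTOEXTEND ON NEXT 10G MAXSIZE 10T;
    -- pin the project account to it and cap its quota
    ALTER USER project_a_user
      DEFAULT TABLESPACE project_a_ts
      QUOTA 10T ON project_a_ts;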

  • What is the best big data solution for interactive queries of rows with up?

    We have a simple table such as follows:
    | Name | Attribute1 | Attribute2 | Attribute3 | ... | Attribute200 |
    | Name1 | Value1 | Value2 | null | ... | Value3 |
    | Name2 | null | Value4 | null | ... | Value5 |
    | Name3 | Value6 | null | Value7 | ... | null |
    | ... |
    But there could be up to hundreds of millions of rows/names. The data will be populated every hour or so.
    The goal is to get results for interactive queries on the data within a couple of seconds.
    Most queries look like:
    select count(*) from table
    where Attribute1 = Value1 and Attribute3 = Value3 and Attribute113 = Value113;
    The WHERE clause contains an arbitrary number of attribute name-value pairs.
    I'm new to big data and am wondering what the best option is in terms of data store (MySQL, HBase, Cassandra, etc.) and processing engine (Hadoop, Drill, Storm, etc.) for interactive queries like the above.

    Hi,
    As always, the correct answer is "it depends".
    - Will there be more reads (queries) or writes (INSERTs)?
    - Will there be any UPDATEs?
    - Does the use case require any of the ACID guarantees, or would "eventual consistency" be fine?
    At first glance, Hadoop (HDFS + MapReduce) doesn't look like a viable option, since you require "interactive queries". Also, if you require any level of ACID guarantees or UPDATE capabilities, the best (and arguably only) solution is an RDBMS. And keep in mind that millions of rows are pocket change for a modern RDBMS on average hardware.
    On the other hand, if there will be a lot more queries than inserts, very few or no updates at all, and eventual consistency will not be a problem, I'd probably recommend you test a key-value store (such as Oracle NoSQL Database). The idea would be to use (AttributeX, ValueY) as the key, and a sorted list of the Names that have ValueY for their AttributeX as the value. This way you only do as many reads as there are attributes in the WHERE clause, and then compute the intersection (very easy and fast with sorted lists).
    Also, I'd do this computation manually. SQL may be comfortable, but I don't think it's big-data ready yet (unless you chose the RDBMS way, of course).
    I hope it helped,
    Joan
    Edited by: JPuig on Apr 23, 2013 1:45 AM
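    To make the intersection idea concrete, here is the same layout expressed relationally (a hedged sketch; the attr_index table is hypothetical). The composite primary key stores each (attribute, value) entry's names in sorted order, so every predicate becomes one ordered range scan and INTERSECT plays the role of the sorted-list intersection:
    CREATE TABLE attr_index (
      attr VARCHAR2(30),
      val  VARCHAR2(100),
      name VARCHAR2(100),
      CONSTRAINT attr_index_pk PRIMARY KEY (attr, val, name)
    );
    SELECT COUNT(*) FROM (
      SELECT name FROM attr_index WHERE attr = 'Attribute1'   AND val = 'Value1'
      INTERSECT
      SELECT name FROM attr_index WHERE attr = 'Attribute3'   AND val = 'Value3'
      INTERSECT
      SELECT name FROM attr_index WHERE attr = 'Attribute113' AND val = 'Value113'
    );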

  • Node strategy for managing many data points

    Earlier, with the older JavaFX, there were issues where rendering many nodes could really slow the scene graph down. Is there now some strategy for letting the scene graph efficiently render only the nodes for data that actually intersects the visible screen?
    In the scenario I'm thinking of, nodes decorate data and are more of a temporary thing, so they need to be reusable, or else created and disposed of quickly when visualizing the data points.

    >
    Is there now some strategy for letting the scene graph efficiently render only the nodes for data that actually intersects the visible screen?
    >
    A variety of strategies (see http://en.wikipedia.org/wiki/Hidden_surface_determination) have been in existence for a while now; JavaFX just hasn't gotten all of them implemented yet. It looks like JavaFX already uses dirty rectangles. I don't know if or how much culling has been implemented, but I'm sure it will be there sooner or later.
