Big Data with OBIEE

What is Big Data, and how do you handle it in OBIEE?
Please give me the steps to implement it.
Thanks.

Hi,
Not sure what you want to achieve here; however, what I read is that you would like a way to turn unstructured data into structured data so you can report on it using OBIEE. (Correct?)
Depending on the amount of data and what you need to do to it before you can structure it, you can create a map/reduce function and use Hadoop. There is a connector to push the Hadoop results into the Oracle database. Then you could use OBIEE to report on the now-structured data in the Oracle database.
As I do not know the details of what you would like to achieve, this is a solution I would start looking into if I were you.
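To make the idea concrete, here is a toy Python sketch of that map/reduce step. The log format, field names, and values are all invented for illustration; a real Hadoop job would distribute the work across many nodes before the structured result is loaded into Oracle.

```python
from collections import defaultdict

# Unstructured input: raw log lines (format invented for this sketch).
logs = [
    "2013-04-01 ERROR payment timeout",
    "2013-04-01 INFO login ok",
    "2013-04-02 ERROR payment timeout",
]

def map_line(line):
    """Map phase: parse one unstructured line into a structured (key, value) pair."""
    day, level, *_ = line.split()
    return ((day, level), 1)

# Reduce phase: aggregate the mapped pairs by key.
reduced = defaultdict(int)
for key, val in map(map_line, logs):
    reduced[key] += val

# 'reduced' now holds structured (day, level) -> count rows,
# the kind of output a connector could push into an Oracle table.
print(sorted(reduced.items()))
```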
Mark as helpful if answered.

Similar Messages

  • Big data Implementation in sql server 2008 r2

    Can someone please explain what big data is, and how big data is implemented or used, with a clear explanation? Screenshots would be most helpful.
    Thanks in advance,
    G. Satish Reddy.

    Can someone please explain what big data is, and how big data is implemented or used, with a clear explanation? Screenshots would be most helpful.
    Unfortunately, the data is too big to fit into a screenshot. :-)
    Big data is one of the most recent buzzwords in the computer industry. SQL 2008 R2 is not a good tool to work with big data.
    Big data is about analysing patterns in big amounts of data, often semi-structured or unstructured. Places like Facebook, Google, and Amazon spend a lot of time analysing click logs to see which pages you went to and from; that way they can predict your interests.
    Much of the big data analysis is performed with Hadoop, which is an open-source offering that works on a non-relational database. Someone mentioned PDW as Microsoft's offering in this space, but Microsoft works with Hadoop as well, and they also marry the two together.
    Anyway, if you really want to get a grip on what this is all about, listen to David DeWitt's excellent keynote from the PASS Summit 2011. You will have to register and become a member of PASS if you aren't already. PASS is free and it is a non-profit organisation for SQL users world-wide.
    http://www.sqlpass.org/summit/2011/live/livestreaming/livestreamingfriday.aspx
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Oracle Business Intelligence with big data

    Has anyone implemented OBIEE 10g utilizing a denormalized data model for a very large transactional data set? There is a potential to generate reports in the 100s of millions of rows, with the data warehouse storing fact tables that have ~1.5 billion rows with a data consumption rate of over 200GB per month.
    Does anyone have any best practices, tips, or advice to determine the feasibility of implementing OBIEE 10g for such a large volume? The data is transactional and there are no current requirements for aggregate data sets. Is it feasible to use OBIEE Answers to generate these types of reports? Thus far I've seen OBIEE Answers hanging/crashing on reports that are > 10MB in size. Any configuration tips or feedback would be appreciated.
    Thanks,
    John

    I think with a Big Data environment you need not worry about caching, runtime aggregation, and processing if your configuration is right. The hardware, along with distributed processing, would take care of most of the load, since Big Data databases are designed to be in-memory and highly responsive.
    The thing you should consider is that the final output should not be too large. What I mean is, your request can process 100 million rows on the database, but the final output can be 1000 rows, since BI is all about summarization.
    If you throw a large amount of data at the Presentation Server, it could make the presentation service unstable.
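As a back-of-the-envelope illustration of that summarization point, here is a small Python sketch (the region names and row counts are invented): the fact data can have a huge number of rows, but what reaches the BI layer after aggregation is tiny.

```python
import random
from collections import Counter

random.seed(0)
# Simulated fact rows: (region, revenue) pairs standing in for a huge fact table.
rows = [(random.choice(["EMEA", "APAC", "AMER"]), random.randint(1, 100))
        for _ in range(100_000)]

# Summarize before handing anything to the presentation layer:
# 100,000 input rows collapse to one row per region.
summary = Counter()
for region, revenue in rows:
    summary[region] += revenue

print(len(rows), "input rows ->", len(summary), "output rows")
```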

  • Can SQL Developer Data Modeler work with OBIEE?

    Can SQL Developer Data Modeler work with OBIEE? Can we export the data model from Data Modeler and import it into OBIEE? Or export the OBIEE metadata to Data Modeler for defining the data model?

    no
    Philip

  • What is the best big data solution for interactive queries on up to hundreds of millions of rows?

    We have a simple table such as follows:
    | Name | Attribute1 | Attribute2 | Attribute3 | ... | Attribute200 |
    | Name1 | Value1 | Value2 | null | ... | Value3 |
    | Name2 | null | Value4 | null | ... | Value5 |
    | Name3 | Value6 | null | Value7 | ... | null |
    | ... |
    But there could be up to hundreds of millions of rows/names. The data will be populated every hour or so.
    The goal is to get results for interactive queries on the data within a couple of seconds.
    Most queries look like:
    select count(*) from table
    where Attribute1 = Value1 and Attribute3 = Value3 and Attribute113 = Value113;
    The where clause contains an arbitrary number of attribute name-value pairs.
    I'm new to big data and wondering what the best option is in terms of data store (MySQL, HBase, Cassandra, etc.) and processing engine (Hadoop, Drill, Storm, etc.) for interactive queries like the above.

    Hi,
    As always, the correct answer is "it depends".
    - Will there be more reads (queries) or writes (INSERTs)?
    - Will there be any UPDATEs?
    - Does the use case require any of the ACID guarantees, or would "eventual consistency" be fine?
    At first glance, Hadoop (HDFS + MapReduce) doesn't look like a viable option, since you require "interactive queries". Also, if you require any level of ACID guarantees or UPDATE capabilities, the best (and arguably only) solution is an RDBMS. And keep in mind that millions of rows is pocket change for a modern RDBMS on average hardware.
    On the other hand, if there will be a lot more queries than inserts, very few or no updates at all, and eventual consistency will not be a problem, I'd probably recommend testing a key-value store (such as Oracle NoSQL Database). The idea would be to use (AttributeX, ValueY) as the key, with a sorted list of the names that have ValueY for their AttributeX as the value. This way you do only as many reads as there are attributes in the WHERE clause, and then compute the intersection (very easy and fast with sorted lists).
    Also, I'd do this computation manually. SQL may be comfortable, but I don't think it's Big Data ready yet (unless you choose the RDBMS way, of course).
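A minimal Python sketch of that key-value layout, with a plain dict standing in for the store and all names and values invented: one read per WHERE predicate, then a merge-style intersection of the sorted name lists.

```python
from functools import reduce

# A plain dict stands in for the key-value store. Keys are (attribute, value)
# pairs; values are sorted lists of the names that match. All data is invented.
store = {
    ("Attribute1", "Value1"): ["Name1", "Name4", "Name7"],
    ("Attribute3", "Value3"): ["Name1", "Name7", "Name9"],
    ("Attribute113", "Value113"): ["Name1", "Name2", "Name7"],
}

def intersect_sorted(a, b):
    """Merge-style intersection of two sorted lists (linear time)."""
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

def count_matches(predicates):
    """One store read per WHERE predicate, then intersect the name lists."""
    lists = [store.get(p, []) for p in predicates]
    return len(reduce(intersect_sorted, lists))

print(count_matches([("Attribute1", "Value1"),
                     ("Attribute3", "Value3"),
                     ("Attribute113", "Value113")]))  # 2 (Name1 and Name7)
```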
    I hope it helped,
    Joan
    Edited by: JPuig on Apr 23, 2013 1:45 AM

  • Oracle Life science Data Hub integrated with OBIEE 11g

    Hi all,
    I have couple of questions from my end,
         1. How can Oracle Life Science Data Hub be integrated with OBIEE 11g?
         2. Is Oracle LSH a separate piece of software? If so, kindly provide the link to download LSH.
    Many thanks in advance.
    Regards,
    Murali

    Hi Murali,
    I do not know whether OBIEE can use Oracle LSH as a source or not, but please refer to the link below. It says that:
    1) Users can then launch the Oracle Business Intelligence Dashboard either through Oracle LSH or through a URL to see data visualizations.
    2) Oracle LSH Release 2.2 supports using OBIEE 10.1.3.4.1 for both programs and visualizations, and OBIEE 11.1.1.5.0 for visualizations only.
    Please check with your administrator.
    Thanks,
    Amol
    (Please mark this answer if you find it correct.)

  • Working with R packages for Big Data

    Hi ,
    I wonder which R packages from the big data and parallel processing family are relevant for use in ML Studio.
    Does ML Studio use MapReduce when running an R script? If so, the RHadoop package seems not useful.
    Would the snowfall package for parallel processing help with high-volume datasets? Would it exploit several CPUs?
    Thanks in advance

    Currently, the R scripts are executed on a single VM. You can manually set up a map-reduce pattern by splitting the data and having multiple Execute R Script modules running in parallel in your experiment graph.
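That splitting pattern can be sketched in plain Python (Execute R Script is an Azure ML Studio concept; this code only illustrates the split/apply/combine shape, with illustrative numbers, and does not call ML Studio):

```python
# A minimal split/apply/combine sketch of the manual map-reduce pattern.
data = list(range(1, 101))

n_partitions = 4
# Split: deal the rows into four partitions (one per would-be module).
partitions = [data[i::n_partitions] for i in range(n_partitions)]

# Apply: each partition is processed independently, as parallel modules would.
partials = [sum(part) for part in partitions]

# Combine: merge the per-partition results into the final answer.
total = sum(partials)
print(total)  # 5050
```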
    -Roope

  • Is there any connectors between - OBIEE RPD & Big Data

    Is there any connector between the OBIEE RPD and Big Data? How will we get structured and unstructured data into the OBIEE RPD to generate reports?

    Not sure what you want to achieve here; however, what I read is that you would like a way to turn unstructured data into structured data so you can report on it using OBIEE. (Correct?)
    Depending on the amount of data and what you need to do to it before you can structure it, you can create a map/reduce function and use Hadoop. There is a connector to push the Hadoop results into the Oracle database. Then you could use OBIEE to report on the now-structured data in the Oracle database.
    As I do not know the details of what you would like to achieve, this is a solution I would start looking into if I were you. :-)
    Regards,
    Johan Louwers

  • Connection with Hadoop/big data

    Hi,
    How can we connect Hadoop with Endeca Studio? If there is any document, please let me know.
    Thanks and regards,
    Shashank Nikam

    Hi,
    As far as I know, you will need to use Oracle Data Integrator (ODI) or the Oracle Big Data Connectors.
    Some sites:
    Oracle Data Integrator Enterprise Edition 12c | Data Integration | Oracle
    http://docs.oracle.com/cd/E37231_01/doc.20/e36963/concepts.htm#BIGUG107
    http://www.oracle.com/us/products/database/big-data-connectors/overview/index.html

  • Strategy for big data

    Dear experts,
    Currently I'm facing a big data problem. We have about 1 TB of transaction records per month.
    Now I'm trying to create data marts for that and install OBIEE. What are the strategy and steps?
    Please advise...
    BR,
    Eba

    Denis,
    In this case you can do it two ways.
    1. Proxies - You will have to develop a custom report which will collect all the data that needs to be sent, and call the proxy with the collected data as input.
    2. IDocs - If you are dealing with standard IDocs, this is easier. You can activate the configuration to send the IDocs for contracts for all the operations that you have mentioned. Do the required outbound configuration in WE20 to set the target system as XI.
    I am not sure why you are even thinking of scheduling a BPM in XI that will invoke the RFC. SAP as such has scheduling capabilities; I would rather suggest you use those.
    Regards,
    Ravi

  • I am using the big date calendar template and when I submit it to Apple for printing I lose the names of two months. These names are not text boxes. I see the names when I send it in, but something happens during the transmission to Apple.

    I am using the big date calendar template in iPhoto. I am on Lion 10.7.2, on a MacBook Air. The names of the months are on each calendar page, but something happens when I send the data to Apple. The names are part of the template; they are not text boxes. I lose two names on the calendar after it is sent to Apple. Apple suggested I make a PDF file of my calendar before sending it in and check to make sure every name shows. I did this with a calendar I just sent in. The calendar was correct; all names of the months were showing. After sending the data, two month names disappeared, because when it arrived by mail, it was incorrect. Apple looked at my calendar via a PDF file and it was incorrect. This is the second time this has happened. I called Apple and they had me delete several folders in the Library folder, some preferences, and do a complete reinstall of iPhoto. I have not yet remade the defective calendar. I am wondering if anyone else has had this problem?
    kathy

    Control-click on the background of the view all pages window and select "Preview Calendar" from the contextual menu.
    You can also save the pdf as a file to compare to the printed calendar.  If the two names are visible in the pdf file then the printed copy should show them.  Contact Apple for a refund.  Apple Print Products - Apple Store (U.S.)

  • How can I customize my date format in OBIEE?

    How can I customize my date format in OBIEE? I want to see dates in a user-friendly format: MM/DD/YYYY with leading zeros, e.g. (01.21.2008), not the M/D/YYYY (1.21.2008) that I am seeing. I would appreciate a step-by-step process for how to do it in the RPD. I know how to change the data format in my column properties, but since I have more than 5000 columns that need changing, I am looking for a centralized place that can take care of it all.
    Thanks

    Edit the following parameters in OracleBI\web\config\localedefinitions.xml:
    - Search for the locale that you want to customize by looking for the tag <localeDefinition name="en">
    - Customize the following tags as you require:
    <property name="dateShortFormat">MM/DD/YYYY</property>
    <property name="dateLongFormat">dddd, MMMM dd, yyyy</property>
    - Restart the Presentation Server.
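For reference, the same leading-zero distinction expressed in plain Python strftime terms (this only illustrates the two patterns; it is not OBIEE code):

```python
from datetime import date

d = date(2008, 1, 21)

# MM/DD/YYYY pads with leading zeros, matching the dateShortFormat above.
print(d.strftime("%m/%d/%Y"))   # 01/21/2008

# Without zero padding (the platform-dependent %-m/%-d codes on Linux/macOS),
# you would get "1/21/2008", which is what M/D/YYYY produces.
```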

  • SQL*Loader-704 and ORA-12154 error messages when trying to load data with SQL*Loader

    I have a database with two tables that is used by Apex 4.2. One table has 800,000 records; the other has 7 million records.
    The client recently upgraded from Apex 3.2 to Apex 4.2. We exported/imported the data to the new location with no problems.
    The source of the data is an old mainframe system; I needed to make changes to the source data and then load the tables.
    The first time I loaded the data I did it from a command line with SQL*Loader.
    Now when I try to load the data I get this message:
    SQL*Loader-704: Internal error: ulconnect: OCISERVERATTACH
    ORA-12154: TNS:could not resolve the connect identifier specified
    I've searched for postings on these error messages and they all seem to say that SQL*Loader can't find my TNSNAMES file.
    I am able to connect and load data with SQL Developer, so SQL Developer is able to find the TNSNAMES file.
    However, SQL Developer will not let me load a file this big.
    I have also tried to load the file within Apex (SQL Workshop / Utilities) but again, the file is too big.
    So it seems like SQL*Loader is the only option.
    I did find one post online that said to set an environment variable with the path to the TNSNAMES file, but that didn't work.
    Not sure what else to try or where to look.
    Thanks

    Hi,
    You must have more than one tnsnames file or multiple installations of Oracle. What I suggest you do (as I'm sure is mentioned in Ed's link that you were already pointed at) is the following (I assume you are on Windows?):
    Open a command prompt.
    set TNS_ADMIN=PATH_TO_DIRECTORY_THAT_CONTAINS_CORRECT_TNSNAMES_FILE (i.e. something like set TNS_ADMIN=c:\oracle\network\admin)
    This will tell Oracle to use the config files it finds there and no others.
    Then try sqlldr user/pass@db (in the same DOS window).
    See if that connects and let us know.
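The same idea can be driven from a script; here is a hedged Python sketch in which the directory path, connect string, and control file name are all placeholders:

```python
import os
import shutil
import subprocess

# Point SQL*Loader at one specific tnsnames.ora by setting TNS_ADMIN for the
# child process only; this does not touch the machine-wide environment.
tns_dir = r"c:\oracle\network\admin"          # placeholder directory
env = dict(os.environ, TNS_ADMIN=tns_dir)

if shutil.which("sqlldr"):
    # Placeholder credentials/control file; runs only where sqlldr is installed.
    subprocess.run(["sqlldr", "user/pass@db", "control=load.ctl"], env=env)
else:
    print("sqlldr not on PATH; set TNS_ADMIN and run it from a shell instead")
```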
    Cheers,
    Harry
    http://dbaharrison.blogspot.com

  • Retrieving data from OBIEE server using ODBC

    Hello.
    I am having difficulty retrieving data from OBIEE 11g using an ODBC connection. My intention is to read OBIEE Server data via a SQL statement issued from an Oracle database using gateways.
    I have read and followed the material here:
    http://download.oracle.com/docs/cd/E14571_01/bi.1111/e16364/odbc_data_source.htm#CACBFCJA
    I have also tried this from Excel 2007, but have not succeeded:
    http://gerardnico.com/wiki/dat/obiee/import_data_from_obiee_to_excel
    I have set up a gateway in Oracle and a database link. I know it is connecting, because when I change the password to something invalid, I get a message about not being able to connect due to username/password.
    I have taken a query from Answers (the Advanced tab) and tried running it from SQL Developer. The nqserver.log file ends up with the following notification:
    [nQSError: 13013] Init block, 'DUAL Num (=3)', has more variables than the query select list.
    As an experiment, I am using the SampleApp, and have generated the following query in Answers:
    SELECT s_0, s_1, s_2, s_3 FROM (
    SELECT
    0 s_0,
    "A - Sample Sales"."Products"."P3 LOB" s_1,
    DESCRIPTOR_IDOF("A - Sample Sales"."Products"."P3 LOB") s_2,
    "A - Sample Sales"."Base Facts"."1- Revenue" s_3
    FROM "A - Sample Sales"
    ) djm ORDER BY 1, 2 ASC NULLS LAST
    Is there a way to issue this query from a database client like SQL Developer? I have set up a database link called obiee via gateways to the ODBC connection, and added @obiee after the table name in the above query, but I get the NQS error above.
    Thanks for any help. I can provide additional info if it would be useful.
    Ari

    That would explain my difficulty. I went to the link:
    http://download.oracle.com/docs/cd/E14571_01/relnotes.1111/e10132/biee.htm#CHDIFHEE
    which describes the deprecated client. I am confused by the statement there:
    "one of the many widely available third-party ODBC/JDBC tools to satisfy previous NQClient functionality"
    My understanding of ODBC is not very strong, so please bear with me. The statement above is talking about a client tool, but there also needs to be an ODBC driver available for the BI Server. I may be a bit confused between driver and client, and which piece nq handles. I don't understand the comment about third-party ODBC when it comes to the driver piece, as that must be coded for the BI Server. When I go to create a new DSN, the only driver that makes sense for me to select is:
    Oracle BI Server 11g_OH<id>
    That all works fine for setting up a DSN, but I am unable to connect from Excel, etc., using this DSN.
    Put more simply: should I be able to connect to the BI Server via ODBC, not using the deprecated BI ODBC client but using other clients? Is there ODBC connectivity at all in OBIEE 11g?
    Thanks,
    Ari

  • Save waveform data with corresponding time array to spreadsheet file

    Hello to all
    I am looking for a way to save voltage data, acquired and displayed continuously, to a spreadsheet file. Acquiring and displaying is not a big problem, but I have trouble saving the data with the corresponding timestamp at better-than-seconds accuracy. I am acquiring the data at 1000 Hz and read packages of 500 samples. The recording time should not be limited; it will be started and stopped manually. In fact, the acquired voltage data first has to be scaled and then saved. If I use the 'Write Waveform to Spreadsheet' VI, the time column has only seconds' accuracy, and that is not what I need. If I create my own time array from the waveform parameters, I have timing problems. The accuracy in time should reflect the sample rate. I will attach one of my not very successful trials for better understanding.
    please help a newcomer!
    Thanks in advance
    Thomas
    LV 7.1
    Attachments:
    display & record forces v.1.0.vi ‏181 KB

    I usually follow a simple method to generate the time stamp:
    I keep the sampling rate and the number of samples to read equal, and do as shown in the attached VI.
    I also remember building a VI with a small modification to this time stamp generation logic, to cater to acquisitions where the number of samples to read is half/one fourth/one tenth of the specified sampling rate, but I cannot find that VI.
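The timestamp arithmetic behind this can be written out in Python to show the idea (block size and start time are illustrative; a LabVIEW VI would derive the same values from the waveform's t0 and dt):

```python
fs = 1000.0   # sample rate in Hz, as in the question
block = 500   # samples read per iteration, as in the question
t0 = 0.0      # start time of the acquisition, in seconds (illustrative)

def block_times(block_index):
    """One timestamp per sample for the given block, at 1/fs resolution."""
    start = t0 + block_index * block / fs
    return [start + i / fs for i in range(block)]

# Each 500-sample block at 1000 Hz spans exactly 0.5 s, and consecutive
# blocks line up with no gap: the first block ends at 0.499 s and the
# second begins at 0.500 s.
first = block_times(0)
second = block_times(1)
print(first[0], first[-1], second[0])
```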
    Hope this helps
    Regards
    Dev
    Attachments:
    Acq_DAQmx_filesave_time stamp.vi ‏121 KB
