Best workaround for querying across multiple Data Sets?!

Hi folks
Today I was migrating my older OEID 3.0 applications to 3.1, and I noticed that some views from the older version no longer work in the new one.
We used to have multiple Bulk Add/Replace loads (without specifying collection keys), and we could use attributes from any of them in a given view.
My views were like SELECT SUM("an attribute/metric from Bulk Add_1") / SUM("an attribute/metric from Bulk Add_2")
Now that views require a FROM clause, and it has to name a single Data Set, what is the best way to achieve the goal above?
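To make the goal concrete, here is roughly what I am after in 3.1 (only a sketch in EQL-style syntax; the collection keys "Sales" and "Purchase" and the attribute names are hypothetical): aggregate each Data Set in its own statement, then join the results and take the ratio.

DEFINE SalesAgg AS
SELECT SUM(Sales_QTY) AS TotalSales
FROM "Sales"
GROUP BY Part_Number;

DEFINE PurchaseAgg AS
SELECT SUM(Purchase_QTY) AS TotalPurchases
FROM "Purchase"
GROUP BY Part_Number;

RETURN Ratio AS
SELECT SalesAgg.TotalSales / PurchaseAgg.TotalPurchases AS SalesToPurchaseRatio
FROM SalesAgg
JOIN PurchaseAgg ON SalesAgg.Part_Number = PurchaseAgg.Part_Number;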
Best,

Patrick
What I meant by Cross Join is a Cartesian product in a situation where you have many-to-many relations between two Data Sets.
Let's say I have two different data sets coming from two totally different sources, one from the Sales department and the other from the Purchasing department:
Sales Table:
Part Number | Sales QTY | Sales Date | Part Type | Manufacturer | Country
0001        | 70        | 10/5/2012  | TYPE1     | Manuf1       | US
0001        | 120       | 10/6/2012  | TYPE1     | Manuf1       | US
0001        | 350       | 10/7/2012  | TYPE1     | Manuf1       | US
0002        | 100       | 10/8/2012  | TYPE2     | Manuf2       | US
0002        | 80        | 10/9/2012  | TYPE2     | Manuf2       | CA
0003        | 2500      | 10/10/2012 | TYPE3     | Manuf3       | CA
0004        | 180       | 10/11/2012 | TYPE4     | Manuf4       | US
Purchase Table:
Part Number | Purchase QTY | Purchase Date | Part Type | Manufacturer | Country
0001        | 50           | 10/5/2012     | TYPE1     | Manuf1       | US
0001        | 60           | 10/6/2012     | TYPE1     | Manuf1       | US
0001        | 100          | 10/7/2012     | TYPE1     | Manuf1       | US
0001        | 200          | 10/8/2012     | TYPE1     | Manuf1       | US
0002        | 1100         | 10/9/2012     | TYPE2     | Manuf2       | US
0003        | 20           | 10/10/2012    | TYPE3     | Manuf3       | US
What is the preferred approach to ingest this data?
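For concreteness, the per-part comparison I am after looks like this in plain SQL (table and column names are adapted from the sample above); aggregating each side before the join avoids the many-to-many fan-out, since part 0001 alone would otherwise produce 3 x 4 = 12 joined rows:

select coalesce(s.part_number, p.part_number) as part_number,
       s.total_sales,
       p.total_purchases
from  (select part_number, sum(sales_qty) as total_sales
       from sales
       group by part_number) s
full outer join
      (select part_number, sum(purchase_qty) as total_purchases
       from purchase
       group by part_number) p
  on  s.part_number = p.part_number;

-- Expected result from the sample data:
-- 0001 |  540 |  410
-- 0002 |  180 | 1100
-- 0003 | 2500 |   20
-- 0004 |  180 | null (no purchases)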

Similar Messages

  • Matching across multiple character sets

    Would like to know whether anyone has attempted matching across multiple character sets, for example between English and Japanese: what are the pitfalls to avoid, what are the best practices, and what would you like to see from an application/tools perspective as an ideal solution? Thanks.

    If you upgrade to Logic Pro, you'll get WaveBurner as part of the package, which helps you do this, including tweaking your pauses between tracks, fades, etc.
    If you have Toast, you can do it there too.
    If you don't have any third-party software, the workaround would be to assemble all your songs in order, end to end, in a new Logic file, listen to all your tracks and adjust the relative levels between songs, then bounce out the individual tracks that have volume changes with their new volume settings. Finally, you could use any burning app such as [SimplyBurns|http://bit.ly/c1oglP] to create CDs, or bounce the tracks out in Logic with the additional .mp3 option.
    Obviously it's important to listen to your material in order, in context, as some songs will be at the wrong subjective level depending on the tracks on either side in the placement. This isn't really important in digital distribution, where your material probably won't be listened to as a whole, but as individual downloads.

  • Best practice for Plan and actual data

    Hello, what is the best practice for plan and actual data? Should they both be in the same app or in different ones?
    Thanks.

    Hi Zack,
    It will be easier for you to maintain the data in a single application. Every application is required to have the category dimension, so you can use this dimension to separate the actual and plan data.
    Hope this helps.

  • What is the best program for panning across an illustration for an App I am working on?

    What is the best program for panning across an illustration for an App I am working on?


  • Target Spry RowID on page with Multiple data sets from another page

    Hi all,
    I am trying to target a specific data item, on a page with multiple data sets, from a link on another page. (I also have to pass the link through Flash, but let's start with the simple part...)
    You can take a look at the site in progress here:
    http://www.3andband.com/TestSite/iframeTest3.html
    From the Home page I want to link to specific news or concert
    items on the News page
    I have been trying to get SpryURLUtils to do it, but I can't seem to get it working.
    Any help would be greatly appreciated.
    Thanks!
    Ben

    Did you try checking whether it even passes the row value, with a simple alert? alert(params.row)
    Also, maybe you need to reorder the scripts to this:
    <script src="../SpryAssets/SpryURLUtils.js" type="text/javascript"></script>
    <script src="../SpryAssets/xpath.js" type="text/javascript"></script>
    <script src="../SpryAssets/SpryData.js" type="text/javascript"></script>
    <script src="../SpryAssets/SpryCollapsiblePanel.js" type="text/javascript"></script>
    <script src="../SpryAssets/SpryEffects.js" type="text/javascript"></script>
    <script src="../SpryAssets/SpryAccordion.js" type="text/javascript"></script>
    And your JS script (note the observer and the effect function need their closing braces):
    var params = Spry.Utils.getLocationParamsAsObject();
    var dsConcerts = new Spry.Data.XMLDataSet("includes/concerts.xml", "Concerts/concert");
    dsConcerts.setColumnType("image", "image");
    var dsNews = new Spry.Data.XMLDataSet("includes/news.xml", "News/item");
    // Set an observer so that when the data is loaded, we update
    // the current row to the url param value
    dsNews.addObserver({ onPostLoad: function(ds, type) {
        dsNews.setCurrentRow(params.row);
    } });
    function MM_effectBlind(targetElement, duration, from, to, toggle)
    {
        Spry.Effect.DoBlind(targetElement, { duration: duration, from: from, to: to, toggle: toggle });
    }
    So the url params get loaded before the data.

  • Joining multiple tables across multiple data sources in MySQL throws an error

    Hi all,
    I have to join tables across multiple MySQL databases.
    eg :
    Table T1,T2 from DB1
    Table T3 from DB2
    Columns from T3 are aggregated ( dimensional column ).
    Hence, when I query using Oracle Answers (say I have columns from T1, T2 and T3), OBIEE tries to query DB1 using NULL for the columns from T3.
    It uses cast(NULL as INTEGER) for the columns from T3, but MySQL throws an error because "cast(NULL as INTEGER)" is not supported by MySQL.
    I tried executing the query generated by OBIEE manually; if I use cast(NULL as SIGNED INTEGER) instead, it works.
    Pasting the query for reference:
    select distinct D1.c2 as c1,
           D1.c1 as c2,
           cast(NULL as INTEGER) as c3
    from
      (select distinct T87.title_id as c1,
              T59.asin as c2
       from print_book_catalog T59
       left outer join title_authority T87 on T59.asin = T87.asin
       where (T59.asin = '0345378911')
      ) D1
    I am not able to find any config file to set this property (and use SIGNED INTEGER instead of INTEGER); even DBFeatures.INI does not have the relevant information.
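    For reference, the manual rewrite that MySQL accepts (SIGNED, rather than plain INTEGER, is a valid MySQL cast target):
    -- rejected by MySQL:
    select cast(NULL as INTEGER) as c3;
    -- accepted by MySQL:
    select cast(NULL as SIGNED INTEGER) as c3;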
    Can somebody please help me solve this problem?

    Stijn,
    Thank you for the article link. That was very helpful! It seems that I had a few things off as you do need the "This source should be combined with other sources at this level." checked. In my two table source columns for DATA_SOURCE I defined a literal ('086496' and '085597' for the other) in the Column Mapping tab. I pasted the following in the Fragmentation content, checking the "This source..." box on the Content tab:
    eSIS.SANDBOX4_SCHOOLS.DATA_SOURCE = '086496'
    And pasted the following into the WHERE clause, checking "Select distinct values" on the Content tab:
    sandbox4."".OBIEE.NWOCA_SCHOOLS.SCHOOL_CODE = VALUEOF(NQ_SESSION."SCHOOL") AND sandbox4."".OBIEE.NWOCA_SCHOOLS.DATA_SOURCE = VALUEOF(NQ_SESSION."GROUP")
    This took care of my user's security, utilizing the session variables in the WHERE clause. I am now able to generate reports that only one user can access from one data source and share that same report with another user who can only see data from the other data source.
    Many thanks!!!

  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect these scenarios. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking of having 3 different DataBindings.cpx files and changing the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1). Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there an easy way to keep multiple DataBindings.cpx files in sync, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to pull just the dataControlUsages section into a common DataBindings.cpx file?
    2). How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that it would make sense to accomplish with ANT? I'm not an ANT expert at all. The ANT script would have "build-test-ear" and "build-prod-ear" targets which would swap in the correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3). Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long, long delay in responding; I've been traveling. And thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file/etc). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in Production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active." However, our Subversion repository is going to have a few complaints about this.
    John

  • Multiple data sets: a common global dataset and per/report data sets

    Is there a way to have a common dataset included in an actual report data set?
    Case:
    For one project I have about 70 different letters, each letter being a report in Bi Publisher, each one of them having its own dataset(s).
    However all of these letters share a common standardized reference block (e.g. the user, his email address, his phone number, etc), this common reference block comes from a common dataset.
    The layout of the reference block is done by including a sub-layout (an RTF file).
    The SQL query for getting the dataset of the reference block is always the same and, for now, is included in each of the 70 reports.
    This makes maintenance of the reference block very hard, because each of the 70 reports must be adapted when changes to the reference block/dataset are made.
    Is there a better way to handle this? Can I include a shared dataset that I would define and maintain only once, in each single report definition?

    Hi,
    The use of a subtemplate for the centrally managed layout is OK.
    However, I would like to be able to do the same thing for the datasets in the reports:
    one centrally managed data set (definition) for the common dataset, which is dynamic and, in our case, a rather complex query,
    and
    datasets defined on a per-report basis.
    It would be nice if we could do a kind of 'include dataset from another report' when defining the datasets for a report.
    Of course, this included dataset would be executed within each individual report.
    This possibility would make maintaining this one central query easier than maintaining it in each of the 70 reports over and over again.
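    One stop-gap at the database level, rather than a BI Publisher feature (a sketch of a common technique; the view, table and column names are made up): wrap the shared query in a database view so the logic lives in one place and each report's dataset shrinks to a one-liner.
    -- Maintained once, in the database:
    create or replace view common_reference_block as
    select u.user_name,
           u.email_address,
           u.phone_number
    from   app_users u;  -- hypothetical source of the shared query
    -- Each of the 70 report datasets then reduces to:
    select * from common_reference_block where user_name = :p_user;
    Changing the reference block then means changing only the view, although each report still carries the one-line dataset definition.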

  • Best practice for including additional DLLs/data files with plug-in

    Hi,
    Let's say I'm writing a plug-in which calls code in additional DLLs, and I want to ship these DLLs as part of the plug-in.  I'd like to know what is considered "best practice" in terms of whether this is ok  (assuming of course that the un-installer is set up to remove them correctly), and if so, where is the best place to put the DLLs.
    Is it considered ok at all to ship additional DLLs, or should I try and statically link everything?
    If it's ok to ship additional DLLs, should I install them in the same folder as the plug-in DLL (e.g. the .8BF or whatever), in a subfolder of the plug-in folder or somewhere else?
    (I have the same question about shipping additional files too, such as data or resource files.)
    Thanks
                             -Matthew

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third-party ODBC driver which allows us to access the data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object-oriented programming with attributes and methods.
    OO is a choice, not a mandate. Using Java in a procedural way is certainly not ideal, but given that it is existing code I would look more into whether it is well-written procedural code rather than at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement, using whatever means seem expedient in terms of other business constraints, to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables, but certainly in my experience it is rare that there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used).

  • Query from multiple data blocks

    Hello professionals,
    I'm having difficulty finding a solution to query data from multiple data blocks. By the way, I'm using Oracle Forms 10g.
    I have 4 data blocks and all items are database items
    Below are the blocks,
    1. student_main (columns : student_id, student_name, student_state)
    2. subject_main (columns : subject_id, subject_desc, subject_student_id)
    3. invoice_main (columns : invoice_no, invoice_amt, invoice_student_id)
    4. receipt_main (columns : receipt_no, receipt_amt, receipt_invoice_ref)
    The form needs to allow the user to query from any of the items/fields in the data blocks listed above.
    For example:
    : if the user queries by student_id, then all related items from subject_main, invoice_main and receipt_main need to be displayed
    : if the user queries by invoice_no, then all related items from student_main, subject_main and receipt_main need to be displayed
    However, the tricky part is that some students might or might not have invoice/receipt data.
    I tried using relations, but it didn't work both ways.
    I even created relations in every direction (student_main to subject_main, subject_main to student_main, invoice_main to student_main, student_main to invoice_main, etc.), but that just makes the form hang at runtime.
    Need help on this matter.
    Thank you
    Edited by: 990092 on Feb 25, 2013 12:15 AM

    990092 wrote:
    [...] I tried using relations, but it didn't work both ways. I even created relations in every direction, but that just makes the form hang at runtime.
    In general, relations will work; I think your relations were created incorrectly.
    Do it as:
    1. student_main master to subject_main details
    2. student_main master to invoice_main details
    3. invoice_main master to receipt_main details
    Hope this helps
    Hamid
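    For reference, that master-detail chain corresponds to outer joins like the following (a plain SQL sketch using the block and column names from the post; in Forms itself the relations produce this behavior for you):
    select s.student_id, s.student_name, s.student_state,
           sub.subject_id, sub.subject_desc,
           i.invoice_no, i.invoice_amt,
           r.receipt_no, r.receipt_amt
    from student_main s
    left join subject_main sub on sub.subject_student_id = s.student_id
    left join invoice_main i on i.invoice_student_id = s.student_id
    left join receipt_main r on r.receipt_invoice_ref = i.invoice_no;
    -- The LEFT JOINs keep students that have no invoice or receipt rows,
    -- which covers the "might or might not have invoice/receipt data" case.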

  • Best Practices for Accessing the Configuration data Modelled as XML File in OSB

    Hi,
    I referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is
    Re: OSB: What is best practice for reading configuration information
    Another could be
    uploading XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers for the following.
    1] I have an .xsd file which represents the configuration data. The structure of the XSD is:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment...
    For Dev:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration file specific to each Dev/Stage/Prod instance:
    OSB Project Folder
    |
    |--- Dev
    |      |-- Dev_Config_file.xml
    |
    |--- Stage
    |      |-- Stage_Config_file.xml
    |
    |--- Prod
    |      |-- Prod_Config_file.xml
    4] I need a way to load these property files as XML elements/variables inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model the value specifying the current server type (Dev/Stage/Prod) on which the OSB message flow is running. Say, some construct which acts as a global configuration and is accessible inside the OSB message flow. If the value of the global variable is Dev, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] This thread Re: OSB: What is best practice for reading configuration information
    suggests designing a web application which serves the XML file over HTTP and reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating an extra project and adding the dependencies? I read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear... I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    E.g. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage; then $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822

  • Time utility shell dimension across multiple date dimensions

    Hello,
    I have built a time utility shell dimension based on a solution from Tomislav Piasevoli (book: MDX with SSAS 2012 Cookbook); however, I have modified it slightly.
    In his example he uses two hierarchies from the same dimension, whereas I have modified it to use two different date dimensions, with one hierarchy from each. The hierarchies must be able to be filtered by one another or combined in the same query. The problem I'm having is that my solution does not work for distinct count measures. It works for every other type of measure (sum, count of rows, etc.). The filtering works perfectly when one hierarchy is filtered by the other; it's just the distinct count that won't show the parallel period.
    I have created the shell dimension in the typical way: creating a named query like the one below, creating the dimension with the default value, etc.
    SELECT 0 AS ID, 'As is' AS Name
    UNION ALL
    SELECT 1 AS ID, 'Year ago' AS Name
    Below are the original MDX and my modified MDX. If anyone can help, that would be great!
    --original mdx for one date dimension; the below works perfectly for all measures, including distinct counts
    SCOPE( [Time Calcs].[Calc].[Year ago] );
      SCOPE( [Date].[Date].MEMBERS,
             [Date].[Calendar Year].[Calendar Year].MEMBERS );
        This = ( ParallelPeriod( [Date].[Calendar].[Calendar Year], 1, [Date].[Calendar].CurrentMember ),
                 ParallelPeriod( [Date].[Calendar Weeks].[Calendar Year], 1, [Date].[Calendar Weeks].CurrentMember ),
                 [Time Calcs].[Calc].&[0] );
      END SCOPE;
    END SCOPE;
    --my modified mdx for multiple date dimensions; the below works perfectly for all measures except distinct counts
    SCOPE( [Time Calcs].[Calc].[Year ago] );
      SCOPE( [Date].[Date].MEMBERS,
             [Date].[Calendar Year].[Calendar Year].MEMBERS,
             [Delivery Date].[Date].MEMBERS,
             [Delivery Date].[Calendar Year].[Calendar Year].MEMBERS );
        This = ( ParallelPeriod( [Date].[Calendar].[Calendar Year], 1, [Date].[Calendar].CurrentMember ),
                 ParallelPeriod( [Delivery Date].[Calendar Weeks].[Calendar Year], 1, [Delivery Date].[Calendar Weeks].CurrentMember ),
                 [Time Calcs].[Calc].&[0] );
      END SCOPE;
    END SCOPE;

    Hi Toro07,
    Thank you for your question.
    I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay is to be expected while the issue is transferred. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Backend Operation for OK button of Data Set tab of Data Definition

    Hello everyone,
    Please let me know what happens on the backend when we click the OK button on the Data Set tab of a Data Definition in Unifier!
    Cheers,
    Gagan

    To select only the first 100 records is rather easy: use the addition "UP TO X ROWS" on your SELECT statement. This will select only the first X entries for your condition, or for a plain SELECT from any table. Continuing with the next 100 could be a bit of a pain; I honestly do not have an exact idea of how to accomplish that. However, I would suggest looking into keeping an internal counter, and for efficiency you would have to start your selection of rows at the last record of the previous 100. Perhaps it is best to look into ways of buffering this table before you start doing your selections out of it. This would be useful if the application will be used a lot.
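    In generic SQL terms (a sketch only, not ABAP; the table, column and bind-variable names are made up), "start at the last record of the previous 100" is keyset pagination:
    -- First page:
    select * from orders
    order by order_id
    fetch first 100 rows only;
    -- Next page: restart after the last key seen on the previous page.
    select * from orders
    where order_id > :last_seen_order_id
    order by order_id
    fetch first 100 rows only;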

  • Summarizing Data Across Multiple Dates

    Here is my issue.
    I have two dimensions:
    Date and Time. Date is a standard date dimension; Time is a time dimension whose leaf level is the second.
    I need the ability to obtain a value, like a maximum, from my fact table for a given set of prices. I want to summarize this value at the day level. The issue is that the value can span two dates.
    Example: a Sunday starts at 4 PM and ends Monday at 9 AM. I need to obtain the maximum value across these specific times from both Monday and Sunday (at the specified times).
    Effectively, I need MAX(Monday 12:00 AM - 9 AM, Sunday 4 PM - 11:59:59 PM) to be displayed in the data for the Monday.
    Any suggestions on approaches for this?
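    In relational terms, what I need per day is a window that crosses midnight, anchored to the later date (a plain SQL sketch; the fact table and column names are hypothetical):
    -- Max price attributed to Monday 2012-10-08,
    -- over the window Sunday 16:00 through Monday 09:00:
    select date '2012-10-08' as business_date,
           max(price) as max_price
    from   fact_prices
    where  price_ts >= timestamp '2012-10-07 16:00:00'  -- Sunday 4 PM
    and    price_ts <  timestamp '2012-10-08 09:00:00'; -- Monday 9 AM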

    Guys, thanks for your suggestions. Yes, I'll give those options a try...
    It's just a pity there is no 'aggregator'-type activity, which would negate the need to go down the route of writing out to SQL or creating master/child runbooks; those seem like overkill and added complexity, especially for very simple runbooks. Like I say, maybe one for the IP developers out there...
    To illustrate my specific example (one of many simple runbooks that requires a summary email showing its output):
    The first PowerShell activity gets VDI desktop names (3 of them), then the second PowerShell activity uses these machine names as published data to get the associated registration state. From there, I'd ideally like to generate a single summary email showing the 3 machines/registration states instead of 3 separate emails. I added the append-line and get-line activities to expose the output and check it was OK, then tried flattening various activities to try to generate only the single email (I removed the junction activity in front of Send Email, as it didn't work).
    So I'll try the options suggested for splitting runbooks and/or writing to SQL and see how that goes.
    Cheers

  • What is the best way for a parent to share/set up an Apple ID for their children's iPhone if they have an iPhone too?

    So I work for a wireless provider, and I often have customers who buy their children iPhones. They end up sharing Apple IDs amongst themselves. After they have done this, what is the easiest way to get the shared content (contacts, pictures, music via iCloud) separated and onto the correct Apple IDs? Is there a way to get songs purchased on one Apple ID duplicated onto another?

    They will be able to download everything that was bought with their Apple ID. If they all shared an ID and then switched to their own IDs, they won't be able to use any apps or music bought with the other Apple ID, as I explained in my previous response. There is only one way to share apps and content across multiple Apple IDs, and that is Home Sharing. So no, there isn't a "smooth" way to do it.
