Flatten a Key Value Pair...how many joins are too many?

Hello,
So, a product can have many attributes...things that describe the product. In our 3rd party ERP, these are stored in a key-value manner.
product_code
attribute_code
attribute_value
etc.
Now, for some products there are 150+ attributes....you can pretty much guess where this is going...
User wants a report that shows a product_code and its attributes on a single line (in separate columns) for Excel manipulation.
So, the SQL would require joining the same attribute table as many times as there are distinct attribute_codes for a given product_code.
If there are 150 named/distinct attributes that need to be lined up, this would mean 150 joins on that one table.
OR write scalar subqueries, one per attribute,
OR write a function that fetches the attribute_value when you pass the product_code and attribute_code, and call this function 150 times in the SQL select list.
Yes, I know, I should benchmark each approach and select the one that works best... BUT I would like to poll the wisdom of the outstanding individuals in this group to see which of the three approaches would be preferred.
Oh and the users typically "query" hundreds to thousands of products and want this result set.
We are still on the terminally supported Oracle 10g database on Linux.
Thanks,
Manish
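
One standard alternative to all three options, for what it's worth: a single-pass conditional aggregation reads the attribute table once and turns each named attribute into its own MAX(CASE ...) column, so there are no self-joins and no per-row function calls, and it works on 10g. A minimal sketch; the table name product_attributes and the codes COLOR/WEIGHT/WIDTH are placeholders, not the real ERP names:
-- one pass over the table; one column per distinct attribute_code
-- (the 150 CASE lines can be generated from
--  SELECT DISTINCT attribute_code FROM product_attributes)
select product_code,
       max(case when attribute_code = 'COLOR'  then attribute_value end) color,
       max(case when attribute_code = 'WEIGHT' then attribute_value end) weight,
       max(case when attribute_code = 'WIDTH'  then attribute_value end) width
  from product_attributes
 group by product_code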

Marc mentioned it already
with
eav as
(select 1 + mod(level,trunc(dbms_random.value(5,20))) product_code,
        trunc(dbms_random.value(1,500)) attribute_code,
        dbms_random.string('u',dbms_random.value(1,10)) attribute_value
   from dual
connect by level <= 50)
select csv
  from (select product_code,
               'name' att_type,
               product_code||',attribute codes,'||listagg(to_char(attribute_code),',') within group (order by attribute_code) csv
          from eav
         group by product_code
        union all
        select product_code,
               'value' att_type,
               product_code||',attribute values,'||listagg(to_char(attribute_value),',') within group (order by attribute_code)
          from eav
         group by product_code)
order by product_code,att_type
CSV
1,attribute codes,13,299,476
1,attribute values,LOCO,FKEKQ,UQHBYITKZ
2,attribute codes,66,72,121,126,198,307,346
2,attribute values,DJBBK,FVBYYBPQ,LCHQ,BCFYN,ZP,UYWDSGFEJ,CZ
3,attribute codes,32,101,213,352,369,449,499
3,attribute values,XKYBDRKPY,RZBU,RWQN,FVCQKWL,N,HCYTLHN,HCHXQLSU
4,attribute codes,116,210,244,307
4,attribute values,FKCMZCIJ,BAWZV,RCTDQLRE,CF
5,attribute codes,89,144,283,293,389
5,attribute values,YK,CEEAEFX,JEEZLJ,XESPFSWN,TRNYF
6,attribute codes,183,435,449
6,attribute values,CZYGEDPH,QEN,HO
7,attribute codes,282,333,358,373
7,attribute values,GRIY,ZCS,FGFQKEPQ,VITJKBNU
8,attribute codes,180,195,374
8,attribute values,UJPNIOGYS,GNWXLMB,XSFHO
9,attribute codes,30,103,216,485
9,attribute values,FJB,VXQHBYIX,RNZGRDBK,I
10,attribute codes,234
10,attribute values,VKCDNJ
11,attribute codes,27
11,attribute values,QDQHQHGD
12,attribute codes,51,101,223,333
12,attribute values,UMJXWTRLCI,XHSPFNFAX,FNFDEBGAYI,INBNTICY
13,attribute codes,298
13,attribute values,RQOS
14,attribute codes,270,480
14,attribute values,TMWSSNZNXT,PRLODAMEJ
16,attribute codes,297
16,attribute values,CITFASX
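
A caveat for anyone on the 10g release mentioned in the question: LISTAGG only arrived in 11.2. A 10g-compatible sketch of the same aggregation, using the classic CONNECT BY / SYS_CONNECT_BY_PATH technique over the same eav sample (values only, ordered by attribute_code):
-- number the attributes per product, then walk them as a hierarchy
select product_code,
       ltrim(max(sys_connect_by_path(attribute_value,','))
               keep (dense_rank last order by rn),',') csv
  from (select product_code,
               attribute_value,
               row_number() over (partition by product_code
                                      order by attribute_code) rn
          from eav)
 start with rn = 1
connect by rn = prior rn + 1
       and product_code = prior product_code
 group by product_code
 order by product_code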
Regards
Etbin

Similar Messages

  • How many photos are too many photos for iPhoto? When do I need to upgrade to Aperture?

    How many photos are too many photos for iPhoto? When do I need to upgrade to Aperture?

    Hi Terrence, this is what happens all too often: when I click on something, it sometimes takes me 5 more clicks of the mouse, while moving the cursor around, to get rid of it. The one below popped up as I clicked on my junk mail.
    The next one popped up when I was trying to get onto a site. This happens all the time. I don't know why it is happening. It happens in iPhoto, Aperture, everywhere I go, Facebook... everywhere and at any time. So frustrating. I don't know if this is normal, but this was happening just before my last Mac died a year ago.

  • List of Values: Best practice when there are too many rows.

    Hi,
    I am working in JDev 12c. Imagine the following scenario: we have an employee table with organization_id as one of its attributes. I want to set up an LOV for this attribute. From what I understand, if the Organization table contains too many rows (say 3000), this will create extreme overhead, and it would also be impossible to scroll through a simple LOV. So I have decided on the obvious option: to use the LOV as a Combo Box with List of Values. Great so far.
    That LOV will be used by each user, but it doesn't really depend on the user, and the list of organizations will rarely change. I have a sharedApplicationModule that I am using to retrieve lookup values from the DB. Do you think it would be OK to put my ORGANIZATION VO in there and create the View Accessor for my LOV in the Employees View?
    What considerations should I take into account in terms of TUNING the Organization VO?
    Regards

    Hi Raghava,
    as I said, "Preparation Failed" may occur (if I recall correctly) as early as the HTTP request to even get the document for indexing. If this is not possible for TREX, then of course the indexing fails.
    What I suggested was a manual reproduction. So log on to the TREX host (preferably with the user that TREX uses to access the documents) and then simply try to open one of the docs with the "failed" status by pasting its address into the browser. If this does not work, you have a pretty good idea what's happening.
    Unfortunately, if that were the case, this would then be some issue in network communications or ticketing and authorizations, which I cannot tell you from here how to solve.
    In any case, I would advise opening a support message to SAP - probably under the portal component rather than under TREX, as I do not assume that this stage of a queue error has anything to do with the actual engine.
    Best,
    Karsten

  • Memory usage Adobe Photoshop Elements 9 - How many images are too many?

    My wife and I are having a discussion about the memory allocation of Adobe PSE 9.
    We run both of our profiles logged in on a 27" iMac top end from May, 2010, with 12 GB of RAM.
    She had a number of pictures open (let's say 20 - 25) that she was working on editing.  Trying to shut the program down or work within it, the spinning wheel of death made a cameo appearance and brought the system to a crawl.
    I ran Activity Monitor and it said that she had 746 MB of memory free out of 7.39 GB active and 11.27 GB used.  With each image I closed, I gained an additional 5 - 7 MB of free memory.
    Once I force quit PSE 9, that number jumped to nearly 2.5 GB free.
    For those with experience using PSE 9, can you give me an indication:
    How many images do you have open that you're working with at the same time?
    For those people with more Mac OS X experience, would having 750 MB free out of 12 GB reduce the computer to a crawl?
    Thanks in advance.

    As a general rule of thumb (some will disagree with me), if your machine has about 500MB of free RAM or less, the computer will slow down significantly. Also, as a general rule, you can never have too much RAM. One nice thing about your machine is that it can be upgraded to as much as 32GB of RAM; however, 8GB chips are EXTREMELY expensive right now and currently only OWC sells a kit. You can upgrade to 16GB for a lot less; whether you need it or not, no one here can say for sure. I would continue to keep an eye on Activity Monitor and keep my first sentence in the back of your mind.
    Roger

  • TS3276 how many addressees are too many for the iCloud server?

    I want to send a collective email. How many addresses will iCloud accept?

    When working with lots of shorter sequences, I find it much easier to navigate around the different scenes etc.
    If you label each sequence correctly, or logically... i.e. intro, car chase, love scene etc., you can jump there quicker than scrolling along a huge timeline...
    Also, consider processor load... FCP works a lot quicker and more efficiently with shorter sequences, as it's not having to calculate the content of a huge timeline... using the waveform display for example: if you have the waveform visible, it can take FCP ages to draw the graph along a long timeline, whereas with a shorter timeline it's virtually instant.
    The main thing to look at during your edit process is organisation... you want to work smoothly, efficiently, and with as little anxiety as possible... organising your sequences for the initial assembly edit can help keep a clear head.
    Another advantage of using lots of different sub-sequences is catching 'unseen errors'... Even the most experienced Uber-FCP-Masters make mistakes... and when working on shorter sequences, it's much easier to see those mistakes... or rather, mistakes jump out at you more obviously... When editing really long sequences, a weird slip or a slight momentary lapse of concentration can sometimes mess something up right down the other end of the timeline... and it goes un-noticed...
    When you have finished all your 'sub-sequences', copy and paste them into one master sequence... do not nest the sub-sequences into the master...
    all the best

  • How many DADs are too many?

    I've read on the mod_plsql performance tuning pages that fewer is better, but how much of a hit does another DAD really cause? I'm wondering if we could stop running multiple applications under one DAD and break them up by application. Say we went from 6 to 20, as an example.

    Each DAD requires Apache to set up a handler struct for the DAD - and if the DADs are configured with connection pools, additional memory for these is also needed.
    So in that respect a DAD is no different than a mod_perl location handler, or a mod_php location handler, and so on. Same basic Apache configuration, performance and scalability rules apply.
    That said, I have 37 DADs on a production Apache server (a web front-end to several RACs, and SE and EE databases). The little web server is a dual-core AMD SunFire machine, and it shows no strain providing hundreds of users access to a large number of Apex applications.
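
    For readers who haven't set one up: a DAD is just a <Location> block in mod_plsql's dads.conf, which is why the marginal cost is roughly one handler struct plus any connection-pool memory. A minimal sketch of a single DAD; the path, username, password and connect string below are all placeholders:
    # dads.conf - one DAD (all values here are made up)
    <Location /pls/myapp>
      SetHandler pls_handler
      Order allow,deny
      Allow from all
      PlsqlDatabaseUsername      myapp_web
      PlsqlDatabasePassword      changeme
      PlsqlDatabaseConnectString dbhost:1521:orcl
      PlsqlAuthenticationMode    Basic
    </Location>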

  • How many photos are too many to import at once?

    My good ol' iPhoto 6.0.6 crashes these days when I try to import photos from my iPhone to my MacBook. I inadvertently have built up more than 4000 photos on the phone (at least 3000 are already in iPhoto, but I did not delete them from the phone), so I am trying to get the most recent set imported (using 'only import non-duplicates'). (And yes, I will want to delete a good chunk off the phone once I know they are safely stored elsewhere, and backed up, so that iPhoto doesn't have to search and compare thousands at a time.)
    I suspect I am out of luck because I see that I already have 12,890 photos (38 GB) in iPhoto. I am using OS X 10.6.8 with 1GB RAM and have 156 GB available. Is my set-up too old/small to handle such a big import all at once? It seems to quit part way into the import, and when I open iPhoto the next time there are files to be recovered. I wanted to upgrade to the newest iPhoto but I think my OS doesn't support it. I know I will have to upgrade my whole computer soon, but I had hoped, among other things, to get these photos sorted with what I have now.
    Any help gratefully appreciated.

    With 1 gig of RAM, yes, it will be very easy to overwhelm the import process.
    Use Image Capture (in your Applications Folder) to get the files from your phone to a folder on the desktop, then import from there in batches.
    Regards
    TD

  • How to combine a large number of key-value pair tables into a single table?

    I have 250+ key-value pair tables with the following characteristics:
    1) keys are unique within a table but may or may not be unique across tables
    2) each table has about 2 million rows
    What is the best way to create a single table with all the unique key-values from all these tables? The following two queries work up to about 150 tables:
    with
      t1 as ( select 1 as key, 'a1' as val from dual union all
              select 2 as key, 'a1' as val from dual union all
              select 3 as key, 'a2' as val from dual )
    , t2 as ( select 2 as key, 'b1' as val from dual union all
              select 3 as key, 'b2' as val from dual union all
              select 4 as key, 'b3' as val from dual )
    , t3 as ( select 1 as key, 'c1' as val from dual union all
              select 3 as key, 'c1' as val from dual union all
              select 5 as key, 'c2' as val from dual )
    select coalesce(t1.key, t2.key, t3.key) as key
    ,      max(t1.val) as val1
    ,      max(t2.val) as val2
    ,      max(t3.val) as val3
    from t1
    full join t2 on ( t1.key = t2.key )
    full join t3 on ( t2.key = t3.key )
    group by coalesce(t1.key, t2.key, t3.key)
    /
    with
      master as ( select rownum as key from dual connect by level <= 5 )
    , t1 as ( select 1 as key, 'a1' as val from dual union all
              select 2 as key, 'a1' as val from dual union all
              select 3 as key, 'a2' as val from dual )
    , t2 as ( select 2 as key, 'b1' as val from dual union all
              select 3 as key, 'b2' as val from dual union all
              select 4 as key, 'b3' as val from dual )
    , t3 as ( select 1 as key, 'c1' as val from dual union all
              select 3 as key, 'c1' as val from dual union all
              select 5 as key, 'c2' as val from dual )
    select m.key as key
    ,      t1.val as val1
    ,      t2.val as val2
    ,      t3.val as val3
    from master m
    left join t1 on ( t1.key = m.key )
    left join t2 on ( t2.key = m.key )
    left join t3 on ( t3.key = m.key )
    /

    A couple of questions, then a possible solution.
    Why on earth do you have 250+ key-value pair tables?
    Why on earth do you want to consolidate them into one table with one row per key?
    You could do a pivot of all of the tables, without joining. Something like:
    with
      t1 as ( select 1 as key, 'a1' as val from dual union all
              select 2 as key, 'a1' as val from dual union all
              select 3 as key, 'a2' as val from dual )
    , t2 as ( select 2 as key, 'b1' as val from dual union all
              select 3 as key, 'b2' as val from dual union all
              select 4 as key, 'b3' as val from dual )
    , t3 as ( select 1 as key, 'c1' as val from dual union all
              select 3 as key, 'c1' as val from dual union all
              select 5 as key, 'c2' as val from dual )
    select key, max(t1val), max(t2val), max(t3val)
    FROM (select key, val t1val, null t2val, null t3val
          from t1
          union all
          select key, null, val, null
          from t2
          union all
          select key, null, null, val
          from t3)
    group by key
    If you can do this in a single query, unioning all 250+ tables, then you do not need to worry about chaining or migration. It might be necessary to do it in a couple of passes, depending on the resources available on your server. If so, I would be inclined to create the table first, with a larger than normal percent free, then do the first set as a straight insert, and the remaining pass or passes as a merge.
    Another alternative might be to use the approach above, but limit the range of keys in each pass. So pass one would have a predicate like where key between 1 and 10 in each branch of the union, pass 2 would have key between 11 and 20, etc. That way everything would be straight inserts (a sketch follows).
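    A minimal sketch of that pass-limited variant, assuming t1, t2 and t3 are real tables rather than the WITH samples above, and a pre-created target table combined(key, val1, val2, val3) - both assumptions, not part of the original post:
    -- pass 1: straight insert of keys 1-10; later passes shift the range
    insert into combined (key, val1, val2, val3)
    select key, max(t1val), max(t2val), max(t3val)
    from (select key, val t1val, null t2val, null t3val
          from t1 where key between 1 and 10
          union all
          select key, null, val, null
          from t2 where key between 1 and 10
          union all
          select key, null, null, val
          from t3 where key between 1 and 10)
    group by key;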
    Having said all that, I go back to my second question above: why on earth do you want/need to do this? What is the business requirement you are trying to solve? There might be a much better way to meet the requirement.
    John

  • How can we create a look-up in Enterprise Gateway, like a key-value pair?

    How can we create a look-up in Enterprise Gateway, like a key-value pair?

    Hi,
    You want to have a look at KPS, Key Property Store. Link: Key Property Stores
    Cheers,
    Stefan

  • Mapping unique elements to a Key Value Pair using XSLT in ESB

    Hi Guys,
    I need a solution for mapping some of the response elements to a key-value pair in my target schema. How can I achieve this? It is very, very urgent. What will the XSL look like?
    Source
    <Source>
    <Element1>One</Element1>
    <Element2>Two</Element2>
    <Action>Manage</Action>
    </Source>
    Target
    <Target>
    <Action>Manage</Action>
    <AdditionalData>
    <KeyValuePair>
    <key>Element1</key>
    <value>One</value>
    </KeyValuePair>
    <KeyValuePair>
    <key>Element2</key>
    <value>Two</value>
    </KeyValuePair>
    </AdditionalData>
    </Target>

    Below is the solution I finally arrived at myself. Any other solutions would be welcome.
    <ns10:AdditionalData>
      <xsl:for-each select="//node()">
        <xsl:if test="text()">
          <ns16:KeyValuePair>
            <ns16:Key>
              <xsl:value-of select="xp20:upper-case(name(.))"/>
            </ns16:Key>
            <ns16:Value>
              <xsl:value-of select="."/>
            </ns16:Value>
          </ns16:KeyValuePair>
        </xsl:if>
      </xsl:for-each>
    </ns10:AdditionalData>

  • Application specific key-value pairs in jndi.properties

    Hello,
    Can I specify application-specific key-value pairs in jndi.properties?
    I tried something like this
    java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
    java.naming.provider.url=t3://localhost:7001
    myVar=myVal
    When I tried looking up "myVar" from my client program, I got an error.
    The other parameters like weblogic.jndi.WLInitialContextFactory are picked up.
    Any help will be appreciated.
    Vasim

    We have a similar problem.
    We would like to configure our PROVIDER_URL for a specific web application - not for the entire server. Since the URL should be different in development, test and production environments, we would prefer to just set it in the deployment descriptor. And we have a lot of code that just uses
    ctx = new InitialContext();
    when looking up EJBs, queues etc.
    Actually, to take the problem one step further, it should be expected that later we will have EJBs deployed on different machines/clusters - so we will actually need specific URLs for each EJB.
    Is there a good way to do this? Or will we have to custom-develop our own JNDI configuration standard, using application parameters to set which JNDI provider each EJB should be looked up with?
    Alternatively, can we "import" the JNDI trees of the app server into the JNDI tree of the web servers?
    So, how should we go about this?
    Robert Patrick <[email protected]> wrote:
    Vasim wrote:
    Hi Robert,
    You are right. But the object "myVar" which I am trying to look up is not in the JNDI tree, nor am I interested in binding it. But my requirement is that I have one application-specific variable which I am trying to look up, and I don't want to have a separate config file for this... hence the question.
    So, put the properties you want in the jndi.properties file and load the properties file from your code by doing something like this:
    Properties props = new Properties();
    ClassLoader cl = Thread.currentThread().getContextClassLoader();
    if (cl == null)
        cl = ClassLoader.getSystemClassLoader();  // this method lives on ClassLoader, not System
    InputStream is = cl.getResourceAsStream("jndi.properties");
    props.load(is);
    Personally, I would not use this file and would create an application-specific file or, as Daniel suggested, define your properties as a System property and use System.getProperty("myVar").
    btw, is jndi.properties only for those objects which are bound to the JNDI tree?
    jndi.properties is only used for creating the JNDI InitialContext. The whole idea of this file is that in remote client code (without the jndi.properties file), you need to do something like this to tell the JNDI classes how to connect to the JNDI provider:
    Properties props = new Properties();
    props.put(Context.PROVIDER_URL, "t3://myservername:7001");
    props.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    InitialContext ctx = new InitialContext(props);
    but inside the server, you only need to do this, because the server is the provider and already knows how to connect to itself:
    InitialContext ctx = new InitialContext();
    Therefore, the jndi.properties file allows you to externalize the property-setting code that sets up the properties to be passed to the InitialContext constructor, so that the remote client code can look exactly like the code inside the server. The InitialContext constructor will look for this jndi.properties file in your classpath and load it to get the necessary configuration information to determine how to connect to the JNDI provider.
    Hope this helps,
    Robert

  • Key value pair error

    Hello
    how can I fix this problem ?
    SAPNW2004sJavaSP9_Trial\SAP_NetWeaver_2004s_SR_1
    jdkversion 142_09 .
    ERROR 2008-07-09 23:56:30
    CJS-30051  Cannot insert a key value pair into the secure store fails; see output of log file SecureStoreInsert.log: SAP Secure Store in the File System - Copyright (c) 2003 SAP AG
    ERROR 2008-07-09 23:56:30
    FCO-00011  The step insertAdminDataInSecStore with step key |NW_Java_OneHost|ind|ind|ind|ind|0|0|NW_Onehost_System|ind|ind|ind|ind|1|0|NW_CI_Instance|ind|ind|ind|ind|11|0|NW_CI_Instance_Configure_Java|ind|ind|ind|ind|3|0|insertAdminDataInSecStore was executed with status ERROR .
    Thanks
    sas

    Hi, this is the content of SecureStoreInsert.log:
    com.sap.security.core.server.secstorefs.NoEncryptionException: Encryption or decryption is not possible because the full version of the SAP Java Crypto Toolkit was not found (iaik_jce.jar is required, iaik_jce_export.jar is not sufficient) or the JCE Jurisdiction Policy Files don't allow the use of the "PbeWithSHAAnd3_KeyTripleDES_CBC" algorithm.
    at com.sap.security.core.server.secstorefs.SecStoreFS.openExistingStore(SecStoreFS.java:1975)
    at com.sap.security.core.server.secstorefs.SecStoreFS.handleInsert(SecStoreFS.java:963)
    at com.sap.security.core.server.secstorefs.SecStoreFS.main(SecStoreFS.java:1276)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:324)
    at com.sap.engine.offline.OfflineToolStart.main(OfflineToolStart.java:81)
    Caused by: java.lang.SecurityException: The provider IAIK may not be signed by a trusted party
    at javax.crypto.SunJCE_b.a(DashoA12275)
    at javax.crypto.Cipher.a(DashoA12275)
    at javax.crypto.Cipher.getInstance(DashoA12275)
    at com.sap.security.core.server.secstorefs.Crypt.<init>(Crypt.java:220)
    at com.sap.security.core.server.secstorefs.SecStoreFS.<init>(SecStoreFS.java:1346)
    at com.sap.security.core.server.secstorefs.SecStoreFS.handleInsert(SecStoreFS.java:954)
    ... 6 more

  • Is it possible to load 1 billion key-value pairs into BerkeleyDB database?

    Hello,
    I am experimenting with loading huge datasets into a BerkeleyDB database. The procedure is as follows:
    1. Generate a dump-like file using a script. The file contains key-value pairs (on separate lines, exactly in the format of the dump file that can be produced by db_dump). The index is hash.
    2. Use db_load to create a database. The OS is Windows Server 2003.
    Both keys and values are 64-bit longs.
    Using this procedure, I succeeded in loading 25 million pairs into the database. It took about 1-2 hours.
    Next, I tried to load 250 million pairs into an empty database. db_load has already been running for 15 hours. Its memory consumption is very low: private bytes ~2M, working set ~2M, virtual size ~13M. db_load has already read all the pairs from disk, as IO is very low now: ~4M per second. I am not sure if db_load will finish in the next 24 hours.
    My goal is to load eventually 3 billion key-value pairs into one DB.
    I will appreciate if someone will advise me:
    1. Is BerkeleyDB capable of dealing with such a database volume?
    2. Is my procedure good, and how can I optimize it? Is it possible to allocate more RAM to db_load? Are there other ways to optimize the loading time?
    Thank you,
    Gregory.

    Hello Sandra,
    The version is: Berkeley DB 5.0.21: (March 30, 2010).
    The data: keys and values are random 64 bit numbers.
    The header of the "dump" file that I am trying to load is (there are 256 * 1e6 key-value pairs in the file):
    VERSION=3
    format=bytevalue
    type=hash
    h_nelem=512000000
    db_pagesize=8192
    HEADER=END
    db_load allocates a 1G memory cache.
    Thank you,
    Gregory.
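
    A hedged pointer for the cache question: if db_load is run against a database environment, Berkeley DB reads a DB_CONFIG file from the environment home directory, so the cache can be enlarged without touching any code. The 4 GB / 2-segment figures below are illustrative only, not a recommendation:
    # DB_CONFIG in the environment home directory
    # set_cachesize <gbytes> <bytes> <ncache> - here 4 GB split into 2 regions
    set_cachesize 4 0 2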

  • Multiple key-value pairs in JNLP file

    Dear All,
    I have a JNLP file with multiple key-value pairs :
    <resources>
         <j2se version="1.4+"/>
         <jar href="lib/myFile.jar"/>
         <property name="url" value="URL/db"/>
         <property name="default" value="10"/>
         <property name="m1_query" value="URL1"/>
         <property name="m2_query" value="URL2"/>
         <property name="m3_query" value="URL3"/>
         <property name="p1_query" value="URL4"/>
         <property name="p2_query" value="URL5"/>
         <property name="p3_query" value="URL6"/>
         <property name="p4_query" value="URL7"/>
       </resources>
    I don't know what the key names are in the JNLP file, so I can't use getProperty(keyname) directly. Is it possible to read all the key-value pairs into a HashMap and iterate through the list?
    Or is there any other way of dealing with it?
    Many thanks in advance.
    Regards
    Anuj

    Hi Riem,
    Thanks for your help ...
    while waiting for a reply ... I managed to sort out my problem:
                 Properties p = System.getProperties();
                 for (Enumeration enu = p.propertyNames(); enu.hasMoreElements();) {
                     String key = (String) enu.nextElement();
                     Object value = p.getProperty(key);
                     if (!value.equals("")) {
                         keyValues.put(key, value);   // keyValues is a Map declared elsewhere
                     }
                 }
    Thanks for your help.
    Cheers
    Anuj

  • Cannot insert a key value pair into the secure store fails; see output of l

    Hi,
    how can I fix this problem ?
    SAPNW2004sJavaSP9_Trial\SAP_NetWeaver_2004s_SR_1
    jdkversion 142_09 .
    ERROR 2008-07-10 13:13:31
    CJS-30051  Cannot insert a key value pair into the secure store fails; see output of log file SecureStoreInsert.log: SAP Secure Store in the File System - Copyright (c) 2003 SAP AG
    Regds
    sas

    Hi Arzu,
    thank you for your reply.
    The current OS I am using is Microsoft Windows XP Service Pack 2.
    The very last installation was made with JDK version 142_12; however, it was pointless. I can try to reinstall with the newest JCE policy files you mentioned.
    Can you tell me where I can obtain these JCE policy files?
    Regards
    Erdem Sas
