Hierarchical Queries in Berkeley DB

We are migrating an application from Oracle Lite to Berkeley DB. The application in Oracle Lite uses hierarchical queries (CONNECT BY PRIOR), but it seems that Berkeley DB's SQL API does not provide any support for hierarchical queries. So what is the alternative in Berkeley DB?
Thanks.
Edited by: oracle student on Oct 21, 2011 9:19 AM

I have a parent/child relation in a single table, like an employee/manager relation where every employee has an ID and a manager_ID field, and the value of manager_ID comes from the ID field. The hierarchy can go to indefinite levels. Oracle Database provides START WITH ... CONNECT BY PRIOR to query this hierarchy, but I am not sure how to do this with Berkeley DB.
Edited by: oracle student on Oct 27, 2011 9:16 AM
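
Since the Berkeley DB SQL API apparently lacks CONNECT BY, the usual workaround is to walk the hierarchy from application code, one level per query. A minimal sketch follows; the employee table, column names and the JDBC connection are assumptions for illustration, not something from this thread:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class HierarchyWalker {
    // Returns rootId plus every employee reporting (directly or indirectly) to it.
    // Emulates START WITH id = rootId CONNECT BY PRIOR id = manager_id.
    // Assumes the hierarchy is acyclic; a cycle would loop forever.
    public static List<Long> subtree(Connection conn, long rootId) throws SQLException {
        List<Long> result = new ArrayList<>();
        Deque<Long> pending = new ArrayDeque<>();
        pending.add(rootId);                                  // START WITH id = rootId
        try (PreparedStatement ps =
                 conn.prepareStatement("SELECT id FROM employee WHERE manager_id = ?")) {
            while (!pending.isEmpty()) {
                long current = pending.poll();
                result.add(current);
                ps.setLong(1, current);                       // fetch the next level down
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        pending.add(rs.getLong(1));
                    }
                }
            }
        }
        return result;
    }
}

The same level-by-level traversal can also be done against the key/value API with a secondary index keyed on manager_ID.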

Similar Messages

  • How to get primary keys in some order with joins?

    Hi, I am building a BBS using BDB as the backend database; the forum, topic and post databases share one environment. The BBS web application is multi-threaded. If a user selects a forum, its topics are listed in order of last reply time; selecting a topic lists its posts in order of reply time as well.
    struct forum {
        UInt16 forumID;
        string forumName;
        string _lastPoster;      // who is the last one who replied in this forum
    };
    struct topic {
        UInt32 topicID;
        UInt16 forumID;          // topic comes from this forum
        string title;            // topic title
        UInt64 dateOfLastReply;  // when the last reply to this topic happened
    };
    struct post {
        UInt64 postID;
        UInt32 topicID;          // post comes from this topic
        string title;            // post title, as for topic
        UInt64 dateOfPost;       // when this post was created
    };
    I create one primary database and two secondary databases for topic; the primary key is topicID, and the secondary keys are forumID and dateOfLastReply respectively. I want to show the first 25 topics in latest-reply-time order on the first browser page, the next 25 topics on the second page, and so on.
    If this were SQL, it would be: SELECT topicID FROM topic WHERE forumID=xx ORDER BY dateOfLastReply DESC
    From a performance perspective, I want to get all topic IDs of one forum, ordered by reply time, and then retrieve the topics one by one based on the returned topicIDs. How can I do this? I guess I have to use joins.
    Plus, do you have any suggestions about retrieval performance, given that topic retrieval will happen each time the browser requests the next page, that is, the second 25 topics of this forum?
    Is DB_DBT_MULTIPLE helpful to me?
    thanks.
    Edited by: tiplip on 2011-1-22 5:43 AM
    Edited by: tiplip on 2011-1-22 5:52 PM
    Edited by: tiplip on 2011-1-23 7:42 PM
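
    The composite-index approach suggested in the reply below can be fed by a secondary key creator along these lines. This is only a sketch: the byte layout of the topic value (forumID first, then dateOfLastReply) is an assumption, not something stated in this thread.

    import com.sleepycat.db.DatabaseEntry;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.SecondaryDatabase;
    import com.sleepycat.db.SecondaryKeyCreator;
    import java.nio.ByteBuffer;

    public class TopicByForumAndTimeKeyCreator implements SecondaryKeyCreator {
        // Builds a composite secondary key: forumID followed by the bit-inverted
        // dateOfLastReply, so that within one forum the newest topics sort first
        // under Berkeley DB's default byte-wise key comparison.
        public boolean createSecondaryKey(SecondaryDatabase secondary,
                                          DatabaseEntry primaryKey,   // topicID
                                          DatabaseEntry topicData,    // packed topic value
                                          DatabaseEntry result) throws DatabaseException {
            ByteBuffer value = ByteBuffer.wrap(topicData.getData(),
                                               topicData.getOffset(), topicData.getSize());
            int forumID = value.getShort() & 0xFFFF;      // assumed layout: forumID first ...
            long lastReply = value.getLong();             // ... then dateOfLastReply
            ByteBuffer composite = ByteBuffer.allocate(10);
            composite.putShort((short) forumID);
            composite.putLong(~lastReply);                // invert so newer timestamps compare smaller
            result.setData(composite.array());
            return true;
        }
    }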

    Hi tiplip,
    Below I will describe how you can support "SELECT * FROM table WHERE X = key ORDER BY Y" queries using Berkeley DB, which, as you suspected, should be done by using a composite index.
    First of all, think of Berkeley DB as the storage engine underneath an RDBMS. In fact, Berkeley DB was the first "generic data storage library" implemented underneath MySQL. As such, Berkeley DB has API calls and access methods that can support any RDBMS query. However, since BDB is just a storage engine, your application has to provide the code that accesses the data store with an appropriate sequence of steps that will implement the behavior that you want.
    If you have two indices in SQL, each on a single column (call them X and Y), and you do:
    SELECT * FROM table WHERE X = key ORDER BY Y;
    then there are three plausible query plans:
    (1) scan the whole table, ignore both indices, filter by X = key then sort by Y;
    (2) use the index on Y to scan all rows in the required order, filter by X = key;
    (3) use the index on X, find the matching rows, then sort by Y.
    There are cases where (1) would be fastest, because it gets all of the columns from a single scan (the other query plans do random lookups on the primary for each row). This assumes that the data fits into memory and that the sort is fast.
    Query plan (2) will be fastest if the selectivity is moderate to high, looking up rows in the main table is fast, and sorting the rows is very slow for some reason (e.g., some complex collation).
    Query plan (3) will be fastest if the selectivity is small (only a small percentage of the rows in the table matches). This should be the best case for us, making it the best choice in a Berkeley DB key/value application.
    The optimal plan would result from having a composite index on (X, Y), which can return just the desired rows in the desired order. Of course, it does cost additional time and space to maintain that index. But note that you could have this index instead of a simple index on X: it can be used in any query the simple index could be used in.
    Records in Berkeley DB are (key, value) pairs. Berkeley DB supports only a few logical operations on records. They are:
    * Insert a record in a table.
    * Delete a record from a table.
    * Find a record in a table by looking up its key.
    * Update a record that has already been found.
    Notice that Berkeley DB never operates on the value part of a record. Values are simply payload, to be stored with keys and reliably delivered back to the application on demand. Both keys and values can be arbitrary byte strings, either fixed-length or variable-length.
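    For reference, a minimal sketch of what those four operations look like through the Berkeley DB base Java API (the C API is analogous; the key and value bytes below are placeholders):

    import com.sleepycat.db.Database;
    import com.sleepycat.db.DatabaseEntry;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.LockMode;
    import com.sleepycat.db.OperationStatus;

    public class BasicOperations {
        public static void demo(Database db) throws DatabaseException {
            DatabaseEntry key = new DatabaseEntry("some-key".getBytes());
            DatabaseEntry value = new DatabaseEntry("some-value".getBytes());

            db.put(null, key, value);                                        // insert a record
            OperationStatus s = db.get(null, key, value, LockMode.DEFAULT);  // find it by its key
            if (s == OperationStatus.SUCCESS) {
                value.setData("new-value".getBytes());
                db.put(null, key, value);                                    // update the record just found
            }
            db.delete(null, key);                                            // delete the record
        }
    }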
    So, in the case of a "SELECT * FROM table WHERE X = key ORDER BY Y" query, our suggestion, from Berkeley DB's point of view, would be to use a composite index (as you would in SQL), where a secondary key built as X_Y should do the trick, as explained in the following scenario.
    Primary:
         X    Y
    1    10   abc
    2    10   aab
    3    20   bbc
    4    10   bba
    5    20   bac
    6    30   cba
    Secondary:
    10_aab   2
    10_abc   1
    10_bba   4
    20_bac   5
    20_bbc   3
    30_cba   6
    If the query looks like this:
    SELECT * FROM primarydb WHERE X = 10 ORDER BY Y
    the application can run a cursor on the secondary and begin the loop with the DB_SET_RANGE flag on 10. When iterating with DB_NEXT, this will return:
    2    10   aab
    1    10   abc
    4    10   bba
    The application must check for the end of the range inside the loop; in this case it should stop when it hits 20_bac.
    As in SQL, retrieving by a secondary key is remarkably similar to retrieving by a primary key and the Berkeley DB call will look similar to its primary equivalent.
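    As a concrete sketch of that loop through the Java API (getSearchKeyRange corresponds to DB_SET_RANGE and getNext to DB_NEXT; the string keys follow the 10_aab example above, everything else is illustrative):

    import com.sleepycat.db.Cursor;
    import com.sleepycat.db.DatabaseEntry;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.LockMode;
    import com.sleepycat.db.OperationStatus;
    import com.sleepycat.db.SecondaryDatabase;

    public class RangeScan {
        // Returns the primary records whose composite secondary key starts with "10_",
        // in secondary-key order, i.e. WHERE X = 10 ORDER BY Y.
        public static void scanXEquals10(SecondaryDatabase secondary) throws DatabaseException {
            DatabaseEntry key = new DatabaseEntry("10_".getBytes());
            DatabaseEntry data = new DatabaseEntry();          // primary record is returned here
            Cursor cursor = secondary.openCursor(null, null);
            try {
                // DB_SET_RANGE: position on the first key >= "10_".
                OperationStatus s = cursor.getSearchKeyRange(key, data, LockMode.DEFAULT);
                while (s == OperationStatus.SUCCESS) {
                    String secKey = new String(key.getData(), key.getOffset(), key.getSize());
                    if (!secKey.startsWith("10_")) {
                        break;                                 // hit 20_bac: end of the X = 10 range
                    }
                    // process 'data' (the matching row's columns) here
                    s = cursor.getNext(key, data, LockMode.DEFAULT);   // DB_NEXT
                }
            } finally {
                cursor.close();
            }
        }
    }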
    tiplip wrote:
    Plus, do you have any suggestions about retrieval performance, given that topic retrieval will happen each time the browser requests the next page, that is, the second 25 topics of this forum?
    As you are concerned about performance, I think this would be the fastest solution. Of course, you can tune the performance later, after you have the functionality in place. What I think you should do first is increase the cache size, test with a bigger database page size, and perhaps configure the transactional subsystem (if you use one).
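    For the paging question specifically, one option the reply does not spell out is to remember the last composite key shown on the previous page and re-position on it with DB_SET_RANGE, instead of skipping the first 25 records again. A rough sketch, assuming the composite keys are byte strings that start with a per-forum prefix and that the remembered key still exists:

    import com.sleepycat.db.DatabaseEntry;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.LockMode;
    import com.sleepycat.db.OperationStatus;
    import com.sleepycat.db.SecondaryCursor;
    import com.sleepycat.db.SecondaryDatabase;
    import java.util.ArrayList;
    import java.util.List;

    public class TopicPager {
        // Fetches the next 25 topicIDs of one forum, resuming after the last composite
        // key already shown on the previous page.
        public static List<byte[]> nextPage(SecondaryDatabase byForumAndTime,
                                            byte[] lastKeyShown, byte[] forumPrefix)
                throws DatabaseException {
            List<byte[]> topicIds = new ArrayList<byte[]>();
            DatabaseEntry key = new DatabaseEntry(lastKeyShown);
            DatabaseEntry topicId = new DatabaseEntry();   // the topic's primary key
            DatabaseEntry data = new DatabaseEntry();
            SecondaryCursor cursor = byForumAndTime.openSecondaryCursor(null, null);
            try {
                OperationStatus s = cursor.getSearchKeyRange(key, topicId, data, LockMode.DEFAULT);
                while (s == OperationStatus.SUCCESS && topicIds.size() < 25) {
                    s = cursor.getNext(key, topicId, data, LockMode.DEFAULT); // step past the last key shown
                    if (s != OperationStatus.SUCCESS || !startsWith(key, forumPrefix)) {
                        break;                                 // end of data, or left this forum's range
                    }
                    topicIds.add(topicId.getData().clone());
                }
            } finally {
                cursor.close();
            }
            return topicIds;
        }

        private static boolean startsWith(DatabaseEntry key, byte[] prefix) {
            if (key.getSize() < prefix.length) return false;
            byte[] bytes = key.getData();
            for (int i = 0; i < prefix.length; i++) {
                if (bytes[key.getOffset() + i] != prefix[i]) return false;
            }
            return true;
        }
    }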
    If you are not very familiar with how to implement the above in BDB, please read the Guide to Oracle Berkeley DB for SQL Developers, available at: http://www.oracle.com/technetwork/articles/seltzer-berkeleydb-sql-086752.html
    You will also need to be familiar with the following documentation:
    Related documentation pages:
    Secondary indexes - http://download.oracle.com/docs/cd/E17076_01/html/programmer_reference/am_second.html
    Cursor operations - http://download.oracle.com/docs/cd/E17076_01/html/programmer_reference/am_cursor.html#am_curget
    DBcursor->get() - http://download.oracle.com/docs/cd/E17076_01/html/api_reference/C/dbcget.html
    DB_SET_RANGE - http://download.oracle.com/docs/cd/E17076_01/html/api_reference/C/dbcget.html#dbcget_DB_SET_RANGE
    If my answer helps you with your question, please go ahead and rate it as Helpful or Correct, and mark the forum thread as answered. For each unrelated question, please create a new forum thread.
    Good luck with building your forum application,
    Bogdan Coman
    PS: If you are a BDB licensed customer, you can also use My Oracle Support (https://support.oracle.com) to visit the KM note 1210173.1, that discusses the same topic.

  • Hierarchical JSON Queries on Twitter Data Stream

    Hi,
    The ability to query hierarchical JSON data, and the Azure Table Storage output option are great additions to Stream Analytics.
    I'm currently experimenting with querying into data streams from Twitter.
    Stuff like this works fine:
    -- Get statistics for location
    select
    min (id) as Id,
    [user].location as Location,
    count([user].location) as Total,
    avg([user].followers_count) as AvgFollowers,
    avg([user].favourites_count) as AvgFavourites,
    avg([user].friends_count) as AvgFriends,
    avg([user].listed_count) as AvgListed
    from tweetstream
    where [user].location is not null and [user].location != ''
    group by [user].location, TumblingWindow (minute, 1)
    having Total > 10
    -- Get number of tweets by name
    select [user].screen_name, count (id) as Tweets
    from tweetstream
    group by [user].screen_name, TumblingWindow (minute, 1)
    having Tweets > 1
    -- Select based on text in tweet
    select text
    from tweetstream
    where text like '%food%'
    What I am having issues with is repeating data, such as selecting the hashtags from a tweet.
    There can be zero or more hashtags in a tweet, the JSON looks like this:
    {
      "created_at": "Fri Nov 28 21:42:41 +0000 2014",
      "id": 538447914268639232,
      // Deleted
      "entities": {
        "hashtags": [
          { "text": "fall", "indices": [33,38] },
          { "text": "confessionsofaprchic", "indices": [39,60] }
        ],
        "trends": [],
        "urls": [],
        "user_mentions": [],
        "symbols": []
      }
      // Deleted
    }
    Is there a way to select the hashtags for a tweet?
    If not, is it something that will be possible in the future?
    Regards,
    Alan
    Free e-book: Windows Azure Service Bus Developer Guide.

    Today you can't access the contents of an array in the query. We will be extending the query language to allow flattening of arrays for further processing.
    For example, in your scenario, you will be able to transform rows with tweets containing an array of hashtags into multiple rows with individual hashtags.
    This should be available soon.
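    Until that lands, if the tweets pass through your own ingestion code before reaching Stream Analytics, one stopgap is to pre-flatten the hashtags there. A sketch only, assuming Jackson is on the classpath; field names follow the JSON above:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class HashtagFlattener {
        private static final ObjectMapper MAPPER = new ObjectMapper();

        // Expands one tweet into (id, hashtag) pairs, one per hashtag.
        public static List<String[]> flatten(String tweetJson) throws IOException {
            List<String[]> rows = new ArrayList<>();
            JsonNode tweet = MAPPER.readTree(tweetJson);
            String id = tweet.path("id").asText();
            for (JsonNode tag : tweet.path("entities").path("hashtags")) {
                rows.add(new String[] { id, tag.path("text").asText() });   // one row per hashtag
            }
            return rows;
        }
    }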

  • Berkeley DB Java Edition 3.3.69 is available

    Berkeley DB Java Edition (JE) 3.3.69 is now available for download. The release contains a number of bug fixes, some to problems posted on the OTN forum. The full list of changes may be found in the change log.
    There is one critical bug fix in this release, described in the change log as follows:
    "Fix a bug that prevents opening an Environment as read-only under certain circumstances, or causes queries in a read-only Environment to return out of date, and possibly transactionally incorrect, data. The LogFileNotFoundException may be thrown when the problem occurs. [#16368]"
    This is a fix to a bug that was introduced in JE 3.3.62. If you are currently using 3.3.62 and you are opening the Environment read-only or using the DbDump utility (which opens the Environment read-only), then we strongly recommend that you upgrade to JE 3.3.69.
    The corresponding Maven POM will be available in a few days.
    If you are using the JE wrapper plugin for your own Eclipse plugin development AND you open the Environment read-only, you should update that package through
    download.oracle.com/berkeley-db/eclipse. Other DPL Assistant users can consider the update to be optional.

    Morgan,
    Yes, we currently plan to only offer replication for Java 1.5. Our motivations are split between the speed consideration and the codeline issues. We've seen better performance with 1.5. Also taking full advantage of the type safety and concurrent support in 1.5 can end up affecting implementation choices significantly, and can make 1.4 code and 1.5 code diverge a lot.
    As for bug fixing on the 1.4 releases, we don't yet have an official plan. We care very much about supporting our open source users and have been able to provide backwards patches where critical in the past. However, the cost of backporting between 1.5 and 1.4 may be high for some bug fixes, and we'll probably have to decide case by case.
    Regards,
    Linda

  • Need help with Berkeley XML DB Performance

    We need help maximizing the performance of our use of Berkeley XML DB. I am filling in most of the 29-part questionnaire listed by Oracle's BDB team.
    Berkeley DB XML Performance Questionnaire
    1. Describe the Performance area that you are measuring? What is the
    current performance? What are your performance goals you hope to
    achieve?
    We are measuring the performance of loading a document during web application startup. It currently takes 10-12 seconds when only one user is on the system. We are trying to do some testing to get the load time when several users are on the system.
    We would like the load time to be 5 seconds or less.
    2. What Berkeley DB XML Version? Any optional configuration flags
    specified? Are you running with any special patches? Please specify?
    dbxml 2.4.13. No special patches.
    3. What Berkeley DB Version? Any optional configuration flags
    specified? Are you running with any special patches? Please Specify.
    bdb 4.6.21. No special patches.
    4. Processor name, speed and chipset?
    Intel Xeon CPU 5150 2.66GHz
    5. Operating System and Version?
    Red Hat Enterprise Linux Release 4 Update 6
    6. Disk Drive Type and speed?
    Don't have that information
    7. File System Type? (such as EXT2, NTFS, Reiser)
    EXT3
    8. Physical Memory Available?
    4GB
    9. Are you using Replication (HA) with Berkeley DB XML? If so, please
    describe the network you are using, and the number of Replica’s.
    No
    10. Are you using a Remote Filesystem (NFS) ? If so, for which
    Berkeley DB XML/DB files?
    No
    11. What type of mutexes do you have configured? Did you specify
    --with-mutex=? Specify what you find in your config.log; search
    for db_cv_mutex.
    None. We did not specify --with-mutex during bdb compilation.
    12. Which API are you using (C++, Java, Perl, PHP, Python, other) ?
    Which compiler and version?
    Java 1.5
    13. If you are using an Application Server or Web Server, please
    provide the name and version?
    Oracle Application Server 10.1.3.4.0
    14. Please provide your exact Environment Configuration Flags (include
    anything specified in your DB_CONFIG file)
    Default.
    15. Please provide your Container Configuration Flags?
    final EnvironmentConfig envConf = new EnvironmentConfig();
    envConf.setAllowCreate(true);          // If the environment does not exist, create it.
    envConf.setInitializeCache(true);      // Turn on the shared memory region.
    envConf.setInitializeLocking(true);    // Turn on the locking subsystem.
    envConf.setInitializeLogging(true);    // Turn on the logging subsystem.
    envConf.setTransactional(true);        // Turn on the transactional subsystem.
    envConf.setLockDetectMode(LockDetectMode.MINWRITE);
    envConf.setThreaded(true);
    envConf.setErrorStream(System.err);
    envConf.setCacheSize(1024*1024*64);
    envConf.setMaxLockers(2000);
    envConf.setMaxLocks(2000);
    envConf.setMaxLockObjects(2000);
    envConf.setTxnMaxActive(200);
    envConf.setTxnWriteNoSync(true);
    envConf.setMaxMutexes(40000);
    16. How many XML Containers do you have? For each one please specify:
    One.
    1. The Container Configuration Flags
    XmlContainerConfig xmlContainerConfig = new XmlContainerConfig();
    xmlContainerConfig.setTransactional(true);
    xmlContainerConfig.setIndexNodes(true);
    xmlContainerConfig.setReadUncommitted(true);
    2. How many documents?
    Every time a user logs in, the current XML document is loaded from
    an Oracle database table and put into Berkeley XML DB.
    The documents are deleted from XML DB when the Oracle Application
    Server container is stopped.
    The number of documents starts at zero and grows with every login.
    3. What type (node or wholedoc)?
    Node
    4. Please indicate the minimum, maximum and average size of
    documents?
    The minimum is about 2MB and the maximum could be 20MB. The average
    is about 5MB.
    5. Are you using document data? If so please describe how?
    We are using document data only to save changes made
    to the application data in a web application. The final save goes
    to the relational database. Berkeley XML DB is just used to store
    temporary data since going to the relational database for each change
    will cause severe performance issues.
    17. Please describe the shape of one of your typical documents? Please
    do this by sending us a skeleton XML document.
    Due to the sensitive nature of the data, I can provide XML schema instead.
    18. What is the rate of document insertion/update required or
    expected? Are you doing partial node updates (via XmlModify) or
    replacing the document?
    The document is inserted during user login. Any change made to the application
    data grid or other data components gets saved in Berkeley DB. We also have
    an automatic save every two minutes. The final save from the application
    gets saved in a relational database.
    19. What is the query rate required/expected?
    Users will not be entering data rapidly. There will be a lot of think time
    before users enter/modify data in the web application. This is a pilot
    project, but when we go live with this application we expect 25 concurrent
    users.
    20. XQuery -- supply some sample queries
    1. Please provide the Query Plan
    2. Are you using DBXML_INDEX_NODES?
    Yes.
    3. Display the indices you have defined for the specific query.
         XmlIndexSpecification spec = container.getIndexSpecification();
         // ids
         spec.addIndex("", "id", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         spec.addIndex("", "idref", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         // index to cover AttributeValue/Description
         spec.addIndex("", "Description", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_SUBSTRING, XmlValue.STRING);
         // cover AttributeValue/@value
         spec.addIndex("", "value", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         // item attribute values
         spec.addIndex("", "type", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         // default index
         spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         // save the spec to the container
         XmlUpdateContext uc = xmlManager.createUpdateContext();
         container.setIndexSpecification(spec, uc);
    4. If this is a large query, please consider sending a smaller
    query (and query plan) that demonstrates the problem.
    21. Are you running with Transactions? If so please provide any
    transactions flags you specify with any API calls.
    Yes. READ_UNCOMMITTED in some transactions and READ_COMMITTED in others.
    22. If your application is transactional, are your log files stored on
    the same disk as your containers/databases?
    Yes.
    23. Do you use AUTO_COMMIT?
         No.
    24. Please list any non-transactional operations performed?
    No.
    25. How many threads of control are running? How many threads in read
    only mode? How many threads are updating?
    We use Berkeley XML DB within the context of a Struts web application.
    Each user logged into the web application runs a BDB transaction
    within the context of a Struts action thread.
    26. Please include a paragraph describing the performance measurements
    you have made. Please specifically list any Berkeley DB operations
    where the performance is currently insufficient.
    We are clocking 10-12 seconds to load a document from BDB when
    five users are on the system.
    getContainer().getDocument(documentName);
    27. What performance level do you hope to achieve?
    We would like to get less than 5 seconds when 25 users are on the system.
    28. Please send us the output of the following db_stat utility commands
    after your application has been running under "normal" load for some
    period of time:
    % db_stat -h <database environment> -c
    % db_stat -h <database environment> -l
    % db_stat -h <database environment> -m
    % db_stat -h <database environment> -r
    % db_stat -h <database environment> -t
    (These commands require the db_stat utility access a shared database
    environment. If your application has a private environment, please
    remove the DB_PRIVATE flag used when the environment is created, so
    you can obtain these measurements. If removing the DB_PRIVATE flag
    is not possible, let us know and we can discuss alternatives with
    you.)
    If your application has periods of "good" and "bad" performance,
    please run the above list of commands several times, during both
    good and bad periods, and additionally specify the -Z flags (so
    the output of each command isn't cumulative).
    When possible, please run basic system performance reporting tools
    during the time you are measuring the application's performance.
    For example, on UNIX systems, the vmstat and iostat utilities are
    good choices.
    Will give this information soon.
    29. Are there any other significant applications running on this
    system? Are you using Berkeley DB outside of Berkeley DB XML?
    Please describe the application?
    No to the first two questions.
    The web application is an online review of test questions. Users
    log in and then review the items one by one. The relational database
    holds the data as XML. During application load, the application
    retrieves the XML and saves it to BDB. While the user
    is making changes to the data in the application, it writes those
    changes to BDB. Finally, when the user hits the SAVE button, the data
    gets saved to the relational database. We also have an automatic save
    every two minutes, which takes the BDB XML data and saves it to the
    relational database.
    Thanks,
    Madhav
    [email protected]

    Could it be that you simply have not set up indexes to support your query? If so, you could do some basic testing using the dbxml shell:
    milu@colinux:~/xpg > dbxml -h ~/dbenv
    Joined existing environment
    dbxml> setverbose 7 2
    dbxml> open tv.dbxml
    dbxml> listIndexes
    dbxml> query     { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }
    dbxml> queryplan { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }
    Verbosity will make the engine display some (rather cryptic) information on index usage. I can't remember where the output is explained; my feeling is that "V(...)" means the index is being used (which is good), but that observation may not be accurate. Note that some details in the setVerbose command could differ, as I'm using 2.4.16 while you're using 2.4.13.
    Also, take a look at the query plan. You can post it here and some people will be able to diagnose it.
    Michael Ludwig

  • Query container only relative to the Berkeley DB XML Environment location.

    I have been having a great deal of trouble getting any queries to work using the Berkeley DB XML API. I am using Java 7 as my language and Netbeans 7.0.1 as my platform. After lots of experimentation with the Shell and observing its behavior, I seem to have isolated it to my not being able to specify a full absolute path name for my containers. Apparently to use XmlManager.query() or XmlManager.prepare().execute(), I must root the query with either a collection() or doc() entry at the beginning. The problem seems to be that both collection() and doc() only will take a simple filename+extension form for the name of my container. The XmlManager then resolves this name relative to the location of my XmlManager environment. If I specify anything else, I get "Error 6: Invalid URI format [err:FODC0002]" as an XmlException. I have tried various formats of absolute paths for my containers and none of them will work. For my application, the user needs to be able to put his containers anywhere on a local drive. It appears that the only way that I can operate is to put all my containers in that one Environment directory, or possibly one below it. This is a super-serious problem for me if this is true.
    I could find no call in the API to override this behavior. I had hoped that XmlQueryContext.setBaseURI() would do that, since it said it was for specifying a URI against which local things would be resolved. But any call to it with a directory path also raised the same XmlException. Being new with this product, I am hoping that I am missing something obvious. Can anyone help?
    Edward Fairchild
    [email protected]

    I have more information about this problem. It appears to be caused by spaces in the filename or path of the container when forming the collection() prefix in my XQuery. For the following test, I was using a container named "EEF-PGL10GenAncestors Only.gcdb" in my Environment folder. Here is the code that fails.
    XmlQueryContext context = xmlmanGCDB.createQueryContext();
    context.setEvaluationType(XmlQueryContext.Lazy);
    String sContainerName = xmlcontGCDB.getName();
    FileObject foContainer = FileUtil.toFileObject(new File(sContainerName));
    sContainerName = foContainer.getNameExt();
    sContainerName = Util.convertFilepathToURI(sContainerName);
    String sQuery = "collection(\"" + sContainerName + "\")/" + GCDBTAG_INDI;
    if(bDebug) DebugOut.println("sQuery: " + sQuery);
    XmlQueryExpression xmlquery = xmlmanGCDB.prepare(sQuery, context);
    XmlResults xmlresults = xmlquery.execute(context);
    XmlDocument xmldoc = xmlmanGCDB.createDocument();
    boolean bRet = xmlresults.next(xmldoc);
    1. When I run it with the container named above (with the space) and with the Util.convertFilepathToURI() call commented out, I get an XmlException executing the XmlManager.prepare() call. Based on this, I thought that you must require percent encoding of spaces. The exception had Error Code 6: Invalid URI format [err:FODC0002], errcode = QUERY_PARSER_ERROR. The debug line displayed as
    sQuery: collection("EEF-PGL10GenAncestors Only.gcdb")/INDI
    2. So, if I uncomment that Util.convertFilepathToURI() call and run it again, the debug line displays
    sQuery: collection("EEF-PGL10GenAncestors%20Only.gcdb")/INDI
    but that also caused an XmlException, with Error Code 17: EEF-PGL10GenAncestors%20Only.gcdb: container file not found, or not a container, errcode = CONTAINER_NOT_FOUND. So it is not translating the %20 back into a space correctly.
    3. Finally, if I take the space out of the filename and rerun, the debug line displays
    sQuery: collection("EEF-PGL10GenAncestorsOnly.gcdb")/INDI
    and everything works just fine. Is there some other way to encode a space in a container name besides what I have tried? I need some help here, folks.
    Edward Fairchild
    [email protected]

  • New whitepaper - Performing Queries in JE

    All,
    Chao Huang, a member of the JE team, has written a new whitepaper, new and updated as of July 2009, called Performing Queries in Oracle Berkeley DB Java Edition. The whitepaper takes common SQL queries and shows how to execute the same logic using the Direct Persistence Layer (DPL). The goal is to give users who are familiar with SQL some help in learning how to use the DPL. We hope it's useful!
    The JE team

    All,
    Chao Huang, a member of the JE team, has written a new whitepaper, new and updated as of July 2009, called Performing Queries in Oracle Berkeley DB Java Edition. The whitepaper takes common SQL queries and shows how to execute the same logic using the Direct Persistence Layer (DPL). The goal is to give users who are familiar with SQL some help in learning how to use the DPL. We hope it's useful!
    The JE team

  • Could I have a CASE or IF statement in FMS queries?

    Is it possible to have CASE statements in FMS queries?
    For example:
    SELECT T0.[U_DepoistfeeON] case
          when T0.[U_DepoistfeeON] is NOT BLANK  then $[$38.111.160]='Deposit Fee'
          when T0.[U_DepoistfeeON] is BLANK  then  then $[$38.111.160]=BLANK
    end FROM OITM T0
    What is wrong with the above query, please? Thank you very much.
    I don't mind even if the above query is doable with an IF statement in it.

    Hi Rahul, this is what I want:
    I have a user-defined field attached to the item master OITM. The field is called U_DepoistfeeON.
    The above field contains additional deposit-fee taxes for selling beer bottles.
    I have also created a new tax as part of Freight handling. On the Sales Order screen, the Freight drop-down (unhide it first through the form's fields) can have the new tax type "Deposit Fee" selected automatically.
    Thus, if the line item is of beer type and has U_DepositfeeON, then the Freight field should automatically pick the type "Deposit Fee".
    If U_DepositfeeON is zero, then I would like the Freight field on the Sales Order screen ($[$38.111.160]) set to blank.
    I tried to achieve this with the following case statement.
    select T0.U_DepoistfeeON
    from oitm t0
    case
    when T0.U_DepoistfeeON <> 0
    then $[$38.111.160]='Deposit Fee'
    when T0.U_DepoistfeeON = 0
    then $[$38.111.160]=''
    else
    $[$38.111.160]=''
    end;
    Of course it doesn't work. Note: I know I typed DepoistfeeON; the error is not due to that.
    Thanks.

  • How to Dene a Data Link Between Queries: Bind Variables

    This is an interesting topic, and I cannot get it to work using bind variables.
    I have two queries, Q1 and Q2. Q2 needs c_id, account_code and account_type from Q1.
    When I run the data template below, I get only the data for Q1.
    Now people may argue that there is no data in Q2 for the relevant clause, but even if I remove the WHERE clause in Q2 I still get no joy, i.e. data appears for Q1 but not for Q2.
    <dataTemplate name="FLCMR519_DATA_SET" description="Termination Quote Report">
         <parameters>
              <parameter name="cid" dataType="number" defaultValue="1"/>
              <parameter name="p_cln_id" dataType="number" defaultValue="62412"/>
         </parameters>
         <dataQuery>
              <sqlStatement name="Q1">
                   <![CDATA[SELECT qm.qmd_id,
    qm.contract_period,
    qm.quo_quo_id||'/'||qm.quote_no||'/'||qm.revision_no reference_no,
    qm.contract_distance,
    qm.mdl_mdl_id,
    q.qpr_qpr_id,
    q.quo_id,
    q.drv_drv_id,
    qm.revision_user username,
    pb.first_name||' '||pb.last_name op_name,
    pb.telephone_no,
    pb.facsimile_no,
    pb.email,
    q.c_id c_id,
    q.account_type account_type,
    q.account_code account_code,
    m.model_desc,
    ph.payment_description payment_head_desc,
    cl.fms_fms_id,
    cl.start_date,
    cl.end_date,
    cl.actual_end_date,
    cl.con_con_id,
    cl.cln_id,
    cl.term_qmd_id term_qmd_id,
    qm2.contract_period term_period,
    qm2.contract_distance term_distance
    FROM quotations q,
               quotation_models qm,
               contract_lines cl,
               personnel_base pb,
               models m,
               model_types mt,
               payment_headers ph,
               quotation_models qm2
    WHERE q.quo_id = qm.quo_quo_id
           AND cl.cln_id = :p_cln_id
           AND qm.qmd_id = cl.qmd_qmd_id
           AND qm2.revision_user = pb.employee_no (+)
           AND qm.mdl_mdl_id = m.mdl_id
           AND m.mtp_mtp_id = mt.mtp_id
           AND qm.payment_id = ph.payment_header_id
           AND qm2.qmd_id (+) = cl.term_qmd_id
    ]]>
              </sqlStatement>
              <sqlStatement name="Q2">
                   <![CDATA[SELECT ea.c_id,                  ea.account_type,ea.account_code,ea.account_name
    FROM external_accounts ea
                 WHERE ea.c_id = :c_id
                   AND ea.account_type = :account_type
                   AND ea.account_code = :account_code
    ]]>
              </sqlStatement>
         </dataQuery>
    </dataTemplate>

    Defining a dataStructure section is mandatory when there are multiple queries.

  • How to use one WAD template for all the available queries

    Hi,
    I have created a WAD template. Now we have close to 25 queries that will use that template to display their output in the Portal.
    Is there any way that I don't have to create multiple copies of my WAD template,
    i.e. can all queries call that same template through the portal?

    Hi,
    The BEx report uses the standard template 0ANALYSIS_PATTERN when it is executed.
    So if you are sure that the look and feel of all the reports is going to stay the same, you don't even need to attach these queries to templates in WAD.
    When you execute the BEx query, it will pick up the standard template and run it in the portal.
    But if you want to customize the report template:
    1) You can change the standard template 0ANALYSIS_PATTERN and customize it according to your requirement in WAD.
    2) Or you can create a copy of the standard template and change it as required. Then change the template in SPRO (transaction RSCUSTV27) to point to this Z template.
    Let me know if you need more details.
    Regards,
    Forum

  • Can multiple threads share the same cursor in Berkeley DB Java Edition?

    We use Berkeley DB to store our path computation results. We now have two threads that need to retrieve records from the database. Specifically, the first thread accesses the database from the very beginning and reads a certain number of records. Then the second thread needs to access the database and read the remaining records, starting from the position where the cursor stopped in the first thread. But I cannot let these two threads share the same cursor, so I have to open the database separately in the two threads and use an individual cursor for each. This means that in the second thread I have to let the cursor skip over the records already read and then read the rest. However, it is a waste of time having the second thread skip those records. It would be ideal for us if the second thread could start reading exactly where the first thread stopped. I have tried using a transactional cursor and wanted to let the two threads share it, but that didn't work.
    Can anyone give any suggestion? Thank you so much!
    sgao

    If your question is really about using the BDB Java Edition product please post to the JE forum:
    Berkeley DB Java Edition
    If your question is about using the Java API of the BDB (C-based) product, then this is the correct forum.
    --mark
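    Whichever forum applies, the usual pattern for this hand-off is not to share one cursor but to pass the last key read by the first thread to the second thread, which opens its own cursor and re-positions with getSearchKeyRange. A sketch against com.sleepycat.db (com.sleepycat.je has the same calls); names are illustrative:

    import com.sleepycat.db.Cursor;
    import com.sleepycat.db.Database;
    import com.sleepycat.db.DatabaseEntry;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.LockMode;
    import com.sleepycat.db.OperationStatus;

    public class ResumeReader {
        // Thread 2 re-positions on the key thread 1 read last (handed over in
        // lastKeyRead) instead of skipping over the records already read.
        public static void readRemainder(Database db, byte[] lastKeyRead) throws DatabaseException {
            DatabaseEntry key = new DatabaseEntry(lastKeyRead);
            DatabaseEntry data = new DatabaseEntry();
            Cursor cursor = db.openCursor(null, null);     // each thread owns its own cursor
            try {
                // Land on the hand-over key (or the next greater key if it was deleted) ...
                if (cursor.getSearchKeyRange(key, data, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                    // ... then step past it and process the records thread 1 never reached.
                    while (cursor.getNext(key, data, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                        // process key/data here
                    }
                }
            } finally {
                cursor.close();
            }
        }
    }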

  • Error while activating / running BEx queries/reports

    Hello
    I have the following errors while activating / running BEx reports/queries.
    1. While activating BEx reports, I keep getting the following errors:
    Object directory entry R3TR ELEM E341OIR6G715GUMGN0MRXP3MA does not exist
    Error when activating element BX7FOOKQB4588RROCBTC2YVZN
    BEx transport request 'BIDK901352' is not available or not suitable
    2. While running the BEx report/query in BEx Analyzer, I get an error entering the parameters: it is not allowing / giving me the option to select the parameters. For example, for 0COSTCENTER it is not displaying the list of cost centers to select from before I can run that particular report.
    3. How can I troubleshoot errors related to BEx and Portal connectivity?
    4. How would I set up variants for a BEx query/report? Are these variants on a per-user basis or a per-report basis? Can I have a variant for particular users, or must all users select their own variant each time they run the report?
    Thanks. Sorry, I have asked multiple questions in just one thread.
    BI

    Go to SE03 --> Search for Objects in Requests/Tasks option, select the object type and execute; it will give all the transports it has been collected in before.
    Also, you can use the transport connection in RSA1, select the query, and find out which request has locked all the objects in the query, etc.
    Thanks,
    Ravi

  • Error while transporting Queries

    Hi,
    I am getting this error while transporting queries:
    Object '!ZTIC_UKA' (ELEM) of type 'Query' is not available in version 'A'
    Message no. RSO252
    Diagnosis
    You wanted to generate an object with the name '!ZTIC_UKA' (in transport request ELEM) of type 'Query' (TLOGO). This is, however, not available in the BW Repository database. It does not exist in the requested version A. If the version is 'D' then it is possible that an error arose during the delivery or installation. If the version is 'A' then the Object was either not created or not activated.
    System Response
    The object was not taken into account in the next stage of processing
    Thanks

    Hi Murali,
    You are trying to transport the element ZTIC_UKA of the query, maybe a variable, which is not collected in the request.
    Try to transport the query through RSA1 --> Transport Connection --> Query --> give your query name --> drag and drop it to the right-hand side, then click on "collect all dependent objects" and check whether they have been collected in the same request. Make sure your variables are collected.
    Transport it now and check.
    Rgds
    SVU123
    Edited by: svu123 on Mar 4, 2010 7:41 AM

  • Need procedure for creation of BW Roles, Assigning Queries, Publishing Roles

    Hi Experts,
      Could you please let me know the procedure for creating BW roles, assigning queries, and publishing roles in Business Explorer (BEx, BW 3.5)?
    Thanks in advance,
    Andy

    Hi,
    Creating BW Roles
    http://help.sap.com/saphelp_nw04/helpdata/en/52/6714b6439b11d1896f0000e8322d00/frameset.htm
    Assigning Queries
    After creating the query, save the query to a role from the query designer.
    Publishing Roles in Business Explorer
    https://websmp101.sap-ag.de/~sapdownload/011000358700002894802003E/HowToBIPortal1.pdf
    Hope this helps you..!
    -Pradnya

  • Problem using DECODE() function with a Query of Queries

    I posted on my blog about an issue I was having trying to use the PL/SQL DECODE() function with a ColdFusion Query of Queries. This function works fine when you query a database for information. However, when you query another query, it seems that CF doesn't recognize it. I got errors stating that it found a left parenthesis where it expected a FROM keyword. Here is a simplified version of what I am trying to do:
    quote:
    <!--- Simulated query; similar to what I was calling from
    my database --->
    <cfscript>
    qOriginal = queryNew("Name,Email,CountryCode",
    "VarChar,VarChar,VarChar");
    newRow = queryAddRow(qOriginal, 5);
    querySetCell(qOriginal, "Name", "Joe", 1);
    querySetCell(qOriginal, "Email", "[email protected]", 1);
    querySetCell(qOriginal, "CountryCode", "AMER", 1);
    querySetCell(qOriginal, "Name", "Sally", 2);
    querySetCell(qOriginal, "Email", "[email protected]", 2);
    querySetCell(qOriginal, "CountryCode", "AMER", 2);
    querySetCell(qOriginal, "Name", "Bob", 3);
    querySetCell(qOriginal, "Email", "[email protected]", 3);
    querySetCell(qOriginal, "CountryCode", "ASIA", 3);
    querySetCell(qOriginal, "Name", "Mary", 4);
    querySetCell(qOriginal, "Email", "[email protected]", 4);
    querySetCell(qOriginal, "CountryCode", "EURO", 4);
    querySetCell(qOriginal, "Name", "John", 5);
    querySetCell(qOriginal, "Email", "[email protected]", 5);
    querySetCell(qOriginal, "CountryCode", "EURO", 5);
    </cfscript>
    <cfquery name="qCountries" dbtype="query">
    SELECT DISTINCT(CountryCode) AS CountryCode,
    DECODE(states, "AMER", "North America & Canada",
    "EURO", "Europe & Africa", "ASIA", "Japan &
    Asia","") CountryName
    FROM qOriginal
    ORDER BY CountryCode
    </cfquery>
    <cfdump var="#qCountries#">
    <!--- ========== END OF CODE ========== --->
    So running this returned the following error:
    Query Of Queries syntax error.
    Encountered "(. Incorrect Select Statement, Expecting a 'FROM', but encountered '(' instead, A select statement should have a 'FROM' construct.
    Does anybody know why this doesn't work? Is it just not supported? Please note that I have also tried to use the CASE() function instead of DECODE(), and that resulted in basically the same error. For now I am looping over my distinct query with a switch statement and manually loading a new query with the data the way I want it. But it would be a lot cleaner and less code to have DECODE() work. Thx!

    DECODE() is an Oracle function, not generic SQL. Q-of-Q is a very limited subset of SQL and lacks many functions and clauses available in standard SQL, especially what you may be used to using in your particular RDBMS.
    See the Query of Queries user guide.
    Phil
