Kodo & non-relational data storage

I noticed you are working on an LDAP adapter for Kodo. That's nice.
I was wondering why you didn't try to support Oracle Objects. It
should be relatively easy to map JDO onto Oracle objects. JDOQL might be
tricky, but the mapping itself should be pretty straightforward.
I think the primary reason people do not use Oracle objects much is that
Oracle did not provide any reasonable client binding mechanism. Their
JPublisher is pretty useless, but given a JDO frontend it would be nice.
Alex
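
For illustration, the kind of mapping Alex describes could start from a plain persistence-capable class like the sketch below; the Oracle object type and all names are hypothetical.

// Hypothetical Oracle object type the class might bind to:
//   CREATE TYPE ADDRESS_T AS OBJECT (STREET VARCHAR2(80), CITY VARCHAR2(40));
public class Address {
    private String street; // would map to ADDRESS_T.STREET
    private String city;   // would map to ADDRESS_T.CITY

    // JDO keeps the class a POJO; the class-to-type binding would live in
    // the usual XML metadata (package.jdo), and a store manager for Oracle
    // objects would move instances through oracle.sql.STRUCT values rather
    // than flat relational columns.
    public String getStreet() { return street; }
    public String getCity()   { return city; }
}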

Alex,
Kodo supports the Versant object database as persistent storage in
the Kodo/Versant JDO version.
Pinaki

Similar Messages

  • Non-server data storage

    A friend of mine is developing a database for a specific environment:
    A small number of peer-to-peer networked Windows computers in a small office, with no dedicated servers - each computer acting as an individual's workstation. They want a "central" database to store client information etc, with an interface they can all use at once to access and change the data.
    Given the nature of the environment, my first thought was that some sort of file-based data storage (not requiring a server process) would be most appropriate - Access, CSV, XML... but I'm not intimately familiar with JDBC support for these mechanisms, so I wasn't sure what to recommend specifically.
    They are not willing/able to spend any money on this solution, so it must use the current environment. Can someone recommend a data storage method and point me to an appropriate JDBC driver?
    Oh, and while I have your attention: anybody know of a good CSV parser? I'm currently splitting each line of data on commas, but that also splits quoted strings that contain commas (a quote-aware splitter is sketched at the end of this thread).

    Based on this I have a non-Java solution to suggest:
    use MS Access and IIS to develop and deploy an intranet.
    There are several pros to this solution that I see:
    - You can develop and deploy an intranet using web browsers as clients very quickly and relatively cheaply
    - An intranet (a series of web pages) vs. a full-blown application may well be easier to make changes to
    - Having the data in an RDBMS (Access), which will be cost-effective in this case, also makes it relatively simple to upgrade or port the system later
    Now I like programming in Java as much as the next person, but from your requirements it sounds like writing an application might be overkill. In my experience an intranet like this is a pretty good solution: you don't have to install anything on the clients, and you already have the software for the server (if you don't have IIS, most versions of Windows now ship PWS, Personal Web Server, which will work for this). The important thing is to have a good database design so that you can make changes or port the client easily later if you need to.
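
    On the CSV aside above: a minimal quote-aware splitter in plain Java (a sketch, not a full RFC 4180 parser; it handles commas and doubled quotes inside double-quoted fields):

    import java.util.ArrayList;
    import java.util.List;

    public class CsvSplit {
        // Split one CSV line, treating commas inside double quotes as data;
        // a doubled quote inside a quoted field decodes to a literal quote.
        static List<String> split(String line) {
            List<String> fields = new ArrayList<>();
            StringBuilder cur = new StringBuilder();
            boolean inQuotes = false;
            for (int i = 0; i < line.length(); i++) {
                char c = line.charAt(i);
                if (c == '"') {
                    if (inQuotes && i + 1 < line.length() && line.charAt(i + 1) == '"') {
                        cur.append('"');           // escaped quote inside a field
                        i++;
                    } else {
                        inQuotes = !inQuotes;      // opening or closing quote
                    }
                } else if (c == ',' && !inQuotes) {
                    fields.add(cur.toString());    // field boundary
                    cur.setLength(0);
                } else {
                    cur.append(c);
                }
            }
            fields.add(cur.toString());
            return fields;
        }

        public static void main(String[] args) {
            // prints [1, Smith, John, NY]
            System.out.println(split("1,\"Smith, John\",NY"));
        }
    }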

  • Data Storage in Essbase -- Non Numeric

    Hi Experts,
    Could you please explain how non-numeric data (which is not metadata) is stored in Essbase, e.g. dates, text, etc.?
    Does Essbase first convert dates into a numeric format and then store them, or does it do something else?
    Thanks
    N Kumar

    From version 11, Essbase provides some support for text and date measures natively.
    Text measures allow a predefined set of text values to be mapped to a predefined set of numeric values - there is no native support for loading 'free text'. The text measure 'mappings' are stored in the Essbase outline (.otl), but for both text and date cases the value itself is stored numerically, like any other data cell.
    You can also use formats to convert numeric values to text when reporting. Again, Essbase 'really' only stores data as numbers.
    Check out the following page from the latest Database Administrators Guide, which covers all these topics: http://download.oracle.com/docs/cd/E17236_01/epm.1112/esb_dbag/frameset.htm?dtypmeas.html
    John got there before me. Still, follow the link to the DBAG!

  • Uploading data related to storage bin

    Hi,
    which tables do I use to upload, with LSMW, the field Maximum weight on the screen of transaction LS02N?
    Best regards

    Hi,
    try transaction LSMW with a Batch Input Recording if you want to modify your storage bin data.
    If you want to create storage bins you can, as I said before, use LSMW
    with the option Standard Batch/Direct Input; you can also do that from SPRO: Warehouse Management, Master Data, Storage Bins, Define Storage Bin Structure.
    Best regards

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records passed as a batch (terrible), or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query); the shape of that workaround is sketched after this list. The performance is one record/second, which is minimally acceptable but interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
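    For concreteness, the explicit-cursor workaround from the legacy-conversion bullet above looks roughly like this (a sketch only; the legacy table and column names are invented):
    -- Hypothetical shape of the PL/SQL workaround: an explicit cursor over
    -- distinct employee IDs, converting each employee's rows to one XML doc.
    DECLARE
      CURSOR emp_ids IS SELECT DISTINCT ssn FROM legacy_employees;
      doc XMLTYPE;
    BEGIN
      FOR e IN emp_ids LOOP
        SELECT XMLELEMENT("Root",
                 XMLELEMENT("Id", e.ssn),
                 XMLAGG(XMLELEMENT("Element",
                          XMLFOREST(l.service_year AS "Year", l.code AS "Code"))))
          INTO doc
          FROM legacy_employees l
         WHERE l.ssn = e.ssn;
        INSERT INTO records(ssn, xmlrec) VALUES (e.ssn, doc);
      END LOOP;
      COMMIT;
    END;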
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    ) xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
      <Root>
        <Id>123456789</Id>
        {for $e in $r/Element
         return
         <Element>
           <Subelement1>
             {$e/Subelement1/Code}
             <Description>
               {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
             </Description>
           </Subelement1>
           <Subelement2>
             {$e/Subelement2/Code}
             <Description>
               {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
             </Description>
           </Subelement2>
           <Subelement3>
             {$e/Subelement3/Code}
             <Description>
               {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
             </Description>
           </Subelement3>
         </Element>}
      </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!), should I be joining and constraining the code tables in the SQL WHERE clause too (roughly as in the sketch below)? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
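    For what it’s worth, a hedged sketch of that join-in-SQL shape against the example tables above; it returns flat rows (which you would reassemble with XMLELEMENT/XMLAGG) and is a starting point rather than a tested solution:
    -- Hypothetical rewrite: project the codes out relationally with XMLTABLE,
    -- then join CODES in ordinary SQL so the optimizer can consider a hash join.
    select x.elem_no, x.code1, c1.description
    from records r,
         xmltable('/Root/Element' passing r.xmlrec
                  columns elem_no for ordinality,
                          code1   varchar2(4) path 'Subelement1/Code') x,
         codes c1
    where r.ssn = '10000'
      and c1.code = x.code1;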

  • TREX - Configuring Distributed Slave with Decentralized Data Storage

    I am creating a distributed TREX environment with decentralized data storage with 3 hosts.  The environment is running TREX 7.10 Rev 14 on Windows 2003 x64.  These are the hosts:
    Server 01p: 1st Master NameServer, Master Index Server, Master Queue Server
    Server 02p: 2nd Master NameServer, Slave Index Server
    Server 03p: Slave NameServer, Slave Index Server (GOAL; Not there yet)
    The first and second hosts are properly set up, with the first host creating the index and replicating the snapshot to the slave index server for searching.  The third host is added to the landscape.  When I attempt to change the role of the third host to be a slave for the Master IS and run a check on the landscape, I receive the following errors:
    check...
    wsaphptd03p: file error on 'wsaphptd03p:e:\usr\sap\HPT\TRX00\_test_file_wsaphptd02p_: The system cannot find the file specified'
    wsaphptd02p: file error on 'wsaphptd02p:e:\usr\sap\HPT\TRX00\_test_file_wsaphptd03p_: The system cannot find the file specified'
    slaves: select 'Use Central Storage' for shared slaves on central storage or change base path to non shared location
    The installs were all performed in the same way, with storage on the "E:" drive using a local install on the stand-alone installation as described in the TREX71InstallMultipleHosts and TREX71InstallSingleHosts guides provided.
    Does anybody know what I should try to do to resolve this issue to add the third host to my TREX distributed landscape?  There really weren't any documents that gave more information besides the install documents.
    Thanks for any help.

    A ticket was opened with SAP customer support.  The response to that ticket is below:
    Many thanks for the connection. We found out that the error message is wrong. It can be ignored if you press 'Shift' and the 'Deploy' button (TREXAdmin tool -> Landscape Configuration). We will fix this error in the next revision (Revision 25) of TREX 7.1.

  • Uploading to online data storage: Now cannot download songs into iTunes

    Hello all.
    I am not seeing this one on any FAQs, knowledge bases, or discussion boards yet:
    I am doing my initial upload of files to SugarSync, a highly-recommended online data storage service. Since I started, I cannot download songs into iTunes, even from the iTunes Store.
    On any new downloads from the iTunes Store, the song DOES appear in the Music library view but immediately after completing the download it shows the exclamation point warning that the file is not in its location. The new folders get created properly within the iTunes Music folder but the subfolder where the song should be is empty. Each time I had to write iTunes Support to get them to make the files available.
    I can still add files manually, no problem. E.g., I can add *.mp3 or *.wav files or folders, I can convert them to AAC, etc. The glitch seems to occur only when automated loads into iTunes are operating.
    As a test, I downloaded a song from Amazon. Here the problem was different but I could work around it manually. Normally the Amazon process loads songs automatically into iTunes, too. Here again, the download did create the proper folder in the iTunes Music folder, as it should, but this time the symptoms were reversed:
    (a) the mp3 file WAS in its folder as it should be (w/the iTunes DL, the file was NOT there)
    (b) the song did NOT appear in the iTunes Music view (w/the iTunes DL, the song DID appear)
    (c) I was able to browse to the file and tell it manually to load into iTunes (w/iTunes I had to write Support and wait a day).
    (I wonder what's the cause of the differences between the two cases.)
    Strictly speaking, I can't PROVE the problem has anything to do with SugarSync (which otherwise seems good so far), but the DL problem started as soon as I started using it. Something in the SugarSync upload or file-monitoring process, or an odd thing in iTunes, seems to be preventing automated, direct loads into iTunes. And since the data service runs in the background so it can monitor file changes, that might mean I can't buy music anymore! Obviously that would be a dealbreaker with SS. (I have contacted SS on this but they've not had a fair chance to reply yet.)
    1) Anyone else have this problem?
    2) Is this permanent or just temporary while I am doing the initial upload?
    3) Anyone know a solution?
    (FYI, I am a highly-experienced user and otherwise quite handy with iTunes files, library moves, and backups. My library is entirely consolidated and all AAC.)
    Thanks.
    (Oh, and this occurred in both iTunes 8 and the new iTunes 9, so it seems unrelated to the upgrade this week.)

    UPDATE 1. CHANGING BROWSER HELPED -- OR DID IT?
    I called Apple iTunes Support, who said the problem is new to them. The technician's hypothesis was that something, perhaps browser-related, was interfering with the initial creation of a temporary file (which should go to the Recycle Bin) and was instead causing the completed file to go there.
    He noted that iTunes, though not going through one's browser onscreen, does use settings within one's default browser. I use Mozilla Firefox, so we switched to IE as the default browser, restarted iTunes, and the song downloaded with no problem! Then I switched back to Mozilla, restarted iTunes, and it worked AGAIN with no problem!
    (Dutifully I advised SugarSync, which is still investigating.)
    UPDATE 2: ARTIST NAME CHANGE - SOME FILES GOT MIS-MOVED / MIS-CHANGED
    Definitely something still wrong. This time some pre-existing song entries (not new downloads) lost their connection to their source file.
    In iTunes, which manages folder names for artists and albums automatically, I corrected the spelling for an Artist, so immediately iTunes renamed the folder, and automatically SugarSync noted the change to be uploaded. While the changed folder name and all the songs within were still uploading to SS, in iTunes I saw exclamation points come up -- but only for some of them. Most files got moved or changed correctly, but several lost the connection to their file (i.e., the file was removed from the original misnamed folder but never moved into the correctly-named folder). Weird.
    Worse, in only some of those cases did I find the missing *.m4a file in the Recycle bin. (I had to retrieve old, original *.mp3 versions from another folder and re-import each into iTunes manually.) I've never seen iTunes have a problem managing an Artist rename until I started using the live SS process.
    (I've reported this to SS and asked if there is a way to disable temporarily SS to see if that's the problem.)
    [Note: I am willing to try downloads again but I am wary of trying to rename entire Artists (Folder) again. That was a lot of work.]
    ====
    UPDATE 3: SERIES OF TESTS - 1 FAILURE USING iTUNES
    The problem still occurs, but not always. Today I rebooted the PC. I tried CD, iTunes, & Amazon. I varied having the browser open when using iTunes.
    Here are the results of a series of attempts to download songs. "FAIL" means the file did not load properly into iTunes or loaded but lost its connection (exclamation point warning).
    #  Source  Mozilla  #Songs  Result
    1  CD      Closed   1       OK
    2  iTunes  Closed   1       OK
    3  Amazon  Open     2       OK
    4  iTunes  Open     1       FAIL
    5  Amazon  Open     1       OK
    6  iTunes  Closed   1       OK
    7  Amazon  Open     2       OK
    8  iTunes  Open     2       OK
    (I reported this to SS. Hoping they'll test and find the problem.)

  • 2.23 Apps must follow the iOS Data Storage Guidelines or they will be rejected

    My Multi Issue v14 App (24124) was just rejected by Apple. Apparently because storage of the data (folios?) was not iCloud compatible.
    Is this related to v14? Would building a v15 app resolve the issue?
    Or is there some other problem?
    Please advise...
    Full text of Apple rejection below...
    Nov 4, 2011 08:17 PM. From Apple.
    2.23
    We found that your app does not follow the iOS Data Storage Guidelines, which is not in compliance with the App Store Review Guidelines.
    In particular, we found magazine downloads are not cached appropriately.
    The iOS Data Storage Guidelines specify:
    "1. Only documents and other data that is user-generated, or that cannot otherwise be recreated by your application, should be stored in the /Documents directory and will be automatically backed up by iCloud.
    2. Data that can be downloaded again or regenerated should be stored in the /Library/Caches directory. Examples of files you should put in the Caches directory include database cache files and downloadable content, such as that used by magazine, newspaper, and map applications.
    3. Data that is used only temporarily should be stored in the /tmp directory. Although these files are not backed up to iCloud, remember to delete those files when you are done with them so that they do not continue to consume space on the user’s device."
    For example, only content that the user creates using your app, e.g., documents, new files, edits, etc., may be stored in the /Documents directory - and backed up by iCloud. Other content that the user may use within the app cannot be stored in this directory; such content, e.g., preference files, database files, plists, etc., must be stored in the /Library/Caches directory.
    Temporary files used by your app should only be stored in the /tmp directory; please remember to delete the files stored in this location when the user exits the app.
    It would be appropriate to revise your app so that you store data as specified in the iOS Data Storage Guidelines.
    For discrete code-level questions, you may wish to consult with Apple Developer Technical Support. Please be sure to include any symbolicated crash logs, screenshots, or steps to reproduce the issues when you submit your request. For information on how to symbolicate and read a crash log, please see Tech Note TN2151 Understanding and Analyzing iPhone OS Application Crash Reports.
    To appeal this review, please submit a request to the App Review Board.

    You might want to check out our ANE (Adobe Native Extension) solution that enables your FB projects to abide by Apple's Data Storage guidelines.
    https://developer.apple.com/library/ios/#qa/qa1719/_index.html
    Do Not Backup project:
    http://www.jampot.ie/ane/ane-ios-data-storage-set-donotbackup-attribute-for-ios5-native-extension/
    David
    JamPot.ie

  • I am having issues related to storage and I believe this is causing my computer to slow down. "Other" files part is the major occupier(180 GB). I have done Omni disk and multiple other cleaning(iTunes-device, restart, etc), yet have not been able to empty

    I am having issues related to storage and I believe this is causing my computer to slow down. The "Other" files category is the major occupier (180 GB). I have done Omni disk and multiple other cleanings (iTunes device, restart, etc.), yet have not been able to free any more space or speed up my computer. Any suggestions? All your contributions are welcomed. Thanks. Mehmet Mazhar Celikoyar

    Below is the result:
    Hardware Information:
              MacBook Pro (15-inch, Mid 2009)
              MacBook Pro - model: MacBookPro5,3
              1 3.06 GHz Intel Core 2 Duo CPU: 2 cores
              4 GB RAM
    Video Information:
              NVIDIA GeForce 9400M - VRAM: 256 MB
              NVIDIA GeForce 9600M GT - VRAM: 512 MB
    Audio Plug-ins:
              BluetoothAudioPlugIn: Version: 1.0
              AirPlay: Version: 1.9
              AppleAVBAudio: Version: 2.0.0
              iSightAudio: Version: 7.7.3
    Startup Items:
              HP IO - Path: /Library/StartupItems/HP IO
    System Software:
              OS X 10.9 (13A603) - Uptime: 3 days 22:8:6
    Disk Information:
              ST9500420ASG disk0 : (500.11 GB)
                        EFI (disk0s1) <not mounted>: 209.7 MB
                        Macintosh HD (disk0s2) /: 499.25 GB (220.49 GB free)
                        Recovery HD (disk0s3) <not mounted>: 650 MB
              HL-DT-ST DVDRW  GS23N 
    USB Information:
              Apple Inc. Built-in iSight
              Apple Internal Memory Card Reader
              Apple Inc. Apple Internal Keyboard / Trackpad
              Apple Computer, Inc. IR Receiver
              Apple Inc. BRCM2046 Hub
                        Apple Inc. Bluetooth USB Host Controller
    FireWire Information:
    Thunderbolt Information:
    Kernel Extensions:
              com.rim.driver.BlackBerryUSBDriverInt          (0.0.64)
              com.livedrive.filesystems.livedrivefs          (2.1.14)
    Problem System Launch Daemons:
    Problem System Launch Agents:
    Launch Daemons:
              [loaded] com.adobe.fpsaud.plist
              [loaded] com.adobe.versioncueCS4.plist
              [loaded] com.creativebe.MainMenuHelper.plist
              [loaded] com.macpaw.CleanMyMac2.Agent.plist
              [loaded] com.magican.castle.plist
              [loaded] com.microsoft.office.licensing.helper.plist
              [loaded] com.rim.BBDaemon.plist
              [failed] com.zeobit.MacKeeper.plugin.AntiTheft.daemon.plist
    Launch Agents:
              [loaded] com.adobe.CS4ServiceManager.plist
              [loaded] com.hp.messagecenter.launcher.plist
              [loaded] com.hp.productresearch.plist
              [loaded] com.rim.BBLaunchAgent.plist
    User Launch Agents:
              [loaded] com.adobe.ARM.[...].plist
              [failed] com.macpaw.CleanMyMac2Helper.diskSpaceWatcher.plist
              [failed] com.macpaw.CleanMyMac2Helper.scheduledScan.plist
              [failed] com.macpaw.CleanMyMac2Helper.trashWatcher.plist
              [failed] com.UninstallerTool.plist
              [failed] com.VolumeWatcherTool.plist
              [failed] com.zeobit.MacKeeper.Helper.plist
    User Login Items:
              BlackBerry Device Manager
              HP Scheduler
    3rd Party Preference Panes:
              Adobe Version Cue CS4
              DC30 Xact Driver Panel
              Flash Player
              Flip4Mac WMV
              Perian
    Internet Plug-ins:
              AdobePDFViewer.plugin
              AdobePDFViewerNPAPI.plugin
              Default Browser.plugin
              Flash Player.plugin
              FlashPlayer-10.6.plugin
              Flip4Mac WMV Plugin.plugin
              iPhotoPhotocast.plugin
              JavaAppletPlugin.plugin
              OfficeLiveBrowserPlugin.plugin
              QuickTime Plugin.plugin
              SharePointBrowserPlugin.plugin
              Silverlight.plugin
    User Internet Plug-ins:
              OctoshapeWeb.plugin
    Bad Fonts:
              None
    Time Machine:
              Mobile backups: OFF
              Auto backup: NO
              Volumes being backed up:
                        Macintosh HD: Disk size: 499.25 GB Disk used: 278.75 GB
              Destinations:
                        TOSHIBA EXT [Local] (Last used)
                        Total size: 2 TB
                        Total number of backups: 5
                        Oldest backup: 2013-10-24 23:21:31 +0000
                        Last backup: 2013-10-25 02:59:08 +0000
                        Size of backup disk: Excellent
                                  Backup size 2 TB > (Disk size 499.25 GB X 3)
    Top Processes by CPU:
                   3%          WindowServer
                   1%          EtreCheck
                   1%          Microsoft PowerPoint
                   0%          BBLaunchAgent
                   0%          fontd
                   0%          aosnotifyd
    Top Processes by Memory:
              168 MB             Microsoft PowerPoint
              123 MB             Safari
              86 MB              Mail
              74 MB              WindowServer
              45 MB              com.apple.WebKit.Networking
              45 MB              com.apple.WebKit.WebContent
              41 MB              Finder
              41 MB              PluginProcess
              41 MB              mds_stores
              33 MB              Notes
    Virtual Memory Statistics:
              72 MB              Free RAM
              1.27 GB            Active RAM
              1.24 GB            Inactive RAM
              667 MB             Wired RAM
              2.58 GB            Page-ins
              111 MB             Page-outs

  • Treo 755p The free data storage space on the device is low. Some data could not be saved

    After having the Treo 755p device for only a week, I am receiving the message "The free data storage space on the device is low. Some data could not be saved."  I have over 55 MB of free space according to the device info page. I cannot find anything about this in the knowledge library; can anyone offer help?
    Post relates to: Treo 755p (Sprint)

    My problems are similar to Lauren's, but backwards.  For months now I have had problems with my Treo 755 resetting randomly.  Sometimes during calls, sometimes while typing, sometimes while just sitting in the holster.   As many as a dozen a day and then not at all for 2 weeks, no rhyme nor reason.  This Free Space error, though, is brand new to me today and quite strange.   The error shows up when I bring my phone out of "sleep".  I press the end button to sleep the phone and everything is fine.  When I press it again to wake the phone up, the error is sitting behind the keylock waiting for me.
    Obviously I have done nothing yet for this new memory error, but I've tried everything for the resetting.  Deleting software, soft and hard resets, software updates, everything I and Sprint support can think of.  With this new Free Space error I am now going to just take it to a service center and demand a new phone.  I think there is some kind of hardware error, as I have changed nothing in the programming.  No new programs, no deleted programs, no changed settings.
    I loved my Treo 650 and never had any problems with it.  I am getting very disappointed in Palm now, though, with these ridiculous problems.
    Post relates to: Treo 755p (Sprint)

  • Data storage in Essbase

    Hi Experts,
    Could you please explain how non-numeric data (which is not metadata) is stored in Essbase, e.g. dates, text, etc.?
    Thanks
    N Kumar

    Traditionally, we learned that Essbase could not store text. This is because of the way the database engine is optimized for financial data. This is still true, although in version 11, Oracle has added the ability to store “typed measures,” which means dates or enumerated strings. This is nice, but an enumerated string must be defined and stored in a separate relational database. With the Store Text In Essbase Add-In, you store the string directly in Essbase. The disadvantage of this add-in, however, is that it is limited in the size of strings it can store without spanning multiple members (see the “Sample” workbook tab in the Excel sheet for examples.)
    If you think of the doubles that are stored in Essbase as strings instead of numbers, you can see that they are composed from the Arabic numerical symbols 0123456789. This is their symbol set. In the Latin alphabet we use in English, there are 26 letters, or symbols. Combining lower and uppercase letters, the Arabic numerals, and a few punctuation characters, we get 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ., - The Store Text In Essbase add-in simply compresses a string encoded in one symbol set (e.g., the Latin alphabet) into the smaller symbol set that Essbase is able to store. That process is reversed to read the stored string back out of the database. The add-in uses two functions, one to compress the string (doubleFromString) and one to decompress it (stringFromDouble.) Here is their definition:
    Function doubleFromString(ByVal strIn As String, ByVal strSymbols As String)
    Function stringFromDouble(ByVal dblInTotal As Double, ByVal strSymbols As String)
    It is only possible to store 8 characters in the symbol set above per outline member, so a string longer than 8 must be chopped into pieces that are stored across multiple outline members (see the “Sample” workbook tab for examples.)
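    In other words, the add-in is doing base-N positional encoding over the symbol set. A minimal sketch of that idea in Java (a hypothetical port; the real add-in is the Excel code linked below, and this sketch ignores corner cases such as a leading first-symbol being dropped):

    public class SymbolCodec {
        // Read the string as a number written in base symbols.length().
        // With the 65-symbol set quoted above, 8 characters fit exactly in a
        // double's 53-bit integer range (65^8 < 2^53), which matches the
        // 8-characters-per-member limit mentioned above.
        static double doubleFromString(String in, String symbols) {
            double value = 0;
            for (char c : in.toCharArray()) {
                value = value * symbols.length() + symbols.indexOf(c);
            }
            return value;
        }

        static String stringFromDouble(double in, String symbols) {
            StringBuilder out = new StringBuilder();
            for (long rest = (long) in; rest > 0; rest /= symbols.length()) {
                out.insert(0, symbols.charAt((int) (rest % symbols.length())));
            }
            return out.toString();
        }
    }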
    For more info visit:
    http://code.google.com/p/store-text-in-essbase/
    Cheers...!!!

  • Non transactional data source and ejb transaction

    Inside an EJB method with trans-attribute = Required,
    I do a bunch of things using a transactional data source and a bunch of things using
    a non-transactional data source.
    It looks like the time spent doing the non-transactional data source work
    does not count toward the transaction timeout defined for the EJB.
    So, what happens here, is the EJB transaction suspended (when I start using the
    non-transactional DS)?

    Hi,
    The transaction is not suspended when you call something
    non-transactional; the non-transactional work simply runs outside the JTA transaction.
    Regards,
    Slava Imeshev
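
    For reference, a minimal sketch of the pattern under discussion: a CMT method touching both kinds of data source (the JNDI names and table names are hypothetical):

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class WorkerBean {
        // Runs with trans-attribute = Required, so the container starts a
        // JTA transaction before this method executes.
        public void doWork() throws Exception {
            InitialContext ctx = new InitialContext();
            DataSource txDs    = (DataSource) ctx.lookup("jdbc/txPool");    // JTA-aware pool
            DataSource nonTxDs = (DataSource) ctx.lookup("jdbc/nonTxPool"); // plain pool

            try (Connection c1 = txDs.getConnection()) {
                // Enlisted in the container's JTA transaction;
                // commits or rolls back with it.
                c1.createStatement().executeUpdate("update t1 set x = x + 1");
            }
            try (Connection c2 = nonTxDs.getConnection()) {
                // Not enlisted: runs as its own local auto-commit transaction
                // while the surrounding JTA transaction stays active
                // (it is not suspended).
                c2.createStatement().executeUpdate("update t2 set y = y + 1");
            }
        }
    }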

  • Where to search for specific dimension-related data

    Hi,
    I guess Hyperion Planning stores dimension-related data (parent, child, UDA, attributes, consolidation operator, data storage, etc.) in relational tables of the Planning application. Can anybody help me understand where and how those data are stored, and which table names I should look in for a particular dimension's data?
    Actually I need to look into the Planning RDBMS tables to get the member names of one particular dimension, then search another huge Oracle database for those and retrieve the relevant data with a query. I am using Planning ver 9.3.1.
    Please revert for any clarification.
    Regards.

    Hi,
    Take a look at the tables below in your application repository schema (db); they are all linked through id fields and they include dimensional information.
    HSP_OBJECT
    HSP_OBJECT_TYPE
    HSP_DIMENSION
    HSP_MEMBER
    HSP_ALIAS
    You get detailed information from HSP_OBJECT; it holds the core details for every metadata object. The other tables will help you understand the relations, positions, etc.
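    For example, member names for one dimension can be pulled with something like this (column names assumed from the 9.3.x schema; verify against your repository before relying on it):
    -- Hypothetical query: member names of one dimension (e.g. Entity).
    SELECT o.OBJECT_NAME
    FROM   HSP_OBJECT o
    JOIN   HSP_MEMBER m ON m.MEMBER_ID = o.OBJECT_ID
    WHERE  m.DIM_ID = (SELECT OBJECT_ID FROM HSP_OBJECT
                       WHERE OBJECT_NAME = 'Entity');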
    Cheers,
    Alp

  • Is it possible to integrate relational data with OLAP cubes?

    I have a web application that accesses cubes created from AWM via the OLAP API. I need to integrate a column from a relational table in the front application and display that column alongside cube data.
    Is there any way to achieve the functionality from the OLAP API?

    Can you explain how the relational data source relates to the OLAP data, is it a master-detail relationship? If this is the case then you could consider the following:
    1) It depends on how you are displaying the OLAP data. If you are using a non-BI Beans presentation bean and the keys are consistent across both data sources, it should be possible to create two separate queries and glue them together using the common keys within your data source module.
    2) Alternatively, you could create a custom text measure within AWM and then use OLAP DML to extract the detail data and load it into a multi-line text variable that could be retrieved via the OLAP API. This might not work well if there is a large number of rows to retrieve in the text variable, as formatting the results within your application might get complicated. The OLAP DML Help contains a lot of excellent examples that will help you create a program that uses SQL commands to load data.
    Hope this helps
    Keith Laker
    Oracle EMEA Consulting
    BI Blog: http://oraclebi.blogspot.com/
    DM Blog: http://oracledmt.blogspot.com/
    BI on Oracle: http://www.oracle.com/bi/
    BI on OTN: http://www.oracle.com/technology/products/bi/
    BI Samples: http://www.oracle.com/technology/products/bi/samples/

  • ICloud Data Storage Guidelines

    We just got this message from Apple. I'm not a developer by trade, I'm a designer, but I don't recall seeing anything in the CS 5.5 app development process that allows for the level of specificity they refer to below. Is this something that needs to be handled by Adobe, or is there some place in the Apple Developer Portal where these changes can be made?
    Dear Developer,
    In recent testing it appears that your app, XXXXXXXXXXXXXXXX, stores a fair amount of data in its Documents folder.
    Since iCloud backups are performed daily over Wi-Fi for each user's iOS device, it's important to ensure the best possible user experience by minimizing the amount of data being stored by your app.
    In addition to purchased music, apps, books, Camera roll, and device settings, everything in your app's home directory, including its Documents folder, is backed up to iCloud.
    Data stored in the application bundle itself, the caches directory, and the temp directory is not backed up to iCloud. Your app should store data in these locations according to the iCloud Data Storage Guidelines on <http://developer.apple.com/icloud/documentation/data-storage/>.
    Please review these guidelines, make any required changes to your app, and submit an update to the App Store.
    If you're not the technical contact for your app, please make sure this email gets to your development team.
    If you have any questions concerning this information, please let me know.
    Thanks for developing for iOS!

    When we release the set of tools for Newsstand (Viewer Builder and a VB backend that will build Newsstand-enabled apps), the code base you build against for Newsstand-enabled viewers will put the storage of folios in an Apple-approved location. Currently we are in a day-for-day waiting period until Apple releases the GM of the iOS 5 SDK.  Once they release the SDK we will incorporate it into our system, test to ensure there were no changes between the iOS SDK betas and the GM, and release a set of tools as soon as we are confident that viewers built on this code base will pass Apple approval.
    Unfortunately I can't provide a timetable because I have no idea when the GM SDK will be released. All I can say is that we are anxiously awaiting the GM drop and are ready to move on it the minute it is released. We understand the importance of Newsstand to our customers.
    Non-Newsstand viewers built with this version of Viewer Builder against the v15 DPS release will also work on iOS 4 devices and will move the storage of folios into an Apple-approved location.

Maybe you are looking for

  • Windows update error 800f0246 when installing printer driver
  • Cannot open NEF files from D200 in Camera Raw 4.1
  • P1102w printing skewed
  • Parallel Port & USB Connections
  • Install A Oracle VM 3.2.1 enviornment on Oracle X3-2 & storageTek 6180 fail