Probably a bug

Hi, I'm writing the XML and XSL code below. It runs perfectly in XMLSPY, but when I use it on Oracle Portal I get the error "The page cannot be displayed".
Please find the XML and XSL files hereunder:
-----XML-----
<?xml version="1.0" encoding="ISO-8859-1"?>
<?xml-stylesheet type="text/xsl" href="C:\WINNT\Profiles\sitel_em\Desktop\test.xsl"?>
<dataset>
<headlines code="PRESS/FR" language="">
<headline storyid="999153468nL30462124" date="30 Aug 2001" time="08:37" lang="EN">
<code>[G][RNP][PGE][PMF][EMK][FR][PRESS][LEN][RTRS][PRESS/FR]</code>
<text>PRESS DIGEST - France - Aug 30</text>
</headline>
<headline storyid="999067370nL29426422" date="29 Aug 2001" time="08:43" lang="EN">
<code>[G][RNP][PGE][PMF][EMK][FR][PRESS][LEN][RTRS][PRESS/FR]<VOWG.DE><EAUG.PA><FMTX.LN><SOGN.PA><FTE.PA><LYOE.PA><EDF.UL></code>
<text>PRESS DIGEST - France - August 29</text>
</headline>
<headline storyid="998890909nL27416527" date="27 Aug 2001" time="07:43" lang="EN">
<code>[G][RNP][PGE][PMF][EMK][FR][PRESS][LEN][RTRS][PRESS/FR]<CRLP.PA><MRK.N><PHA.N><BAYG.DE><EAUG.PA><FTE.PA><TMM.PA><TECF.PA><LAFP.PA></code>
<text>PRESS DIGEST - France - Aug 27</text>
</headline>
<next_headline storyid="998891005nL2463852"/>
</headlines>
<headlines code="SPO FR AND" language="">
<headline storyid="999158007n0948512" date="30 Aug 2001" time="09:53" lang="EN">
<code>[SPO][RNP][DNP][PSP][EMK][GB][IT][FR][SOCC][LEN][RTRS]</code>
<text>Soccer-Blanc arrives but move to United still not finalised</text>
</headline>
<headline storyid="999145169nN29197209" date="30 Aug 2001" time="06:19" lang="EN">
<code>[SPO][AUF][RNP][DNP][PSP][EMK][TENN][US][BE][AU][HR][BR][CZ][CH][ES][RU][FR][JP][SE][DE][EC][IT][WEU][EUROPE][LEN][RTRS]</code>
<text>UPDATE 3-Tennis-Treble chasing Rafter sprints through</text>
</headline>
<headline storyid="999126109nN29285117" date="30 Aug 2001" time="01:01" lang="EN">
<code>[SPO][RNP][DNP][PSP][EMK][TENN][US][FR][WEU][EUROPE][ES][LEN][RTRS]</code>
<text>Tennis-Clement wins battle of the giant-killers</text>
</headline>
<next_headline storyid="999126105nAX2242321"/>
</headlines>
<headlines code=".PAFR" language="">
<headline storyid="999160767nL30307903" date="30 Aug 2001" time="10:39" lang="FR">
<code>[FA][FB][DNP][PMF][STX][FR][WEU][EUROPE][LFR][RTRS][.PAFR]<ORA.PA><FTE.PA><ACCP.PA><STM.PA><CGEP.PA></code>
<text>La Bourse de Paris hésite avant la BCE, pataquès sur France Tel</text>
</headline>
<headline storyid="999156701nL30458520" date="30 Aug 2001" time="09:31" lang="FR">
<code>[FA][DNP][PMF][STX][FR][WEU][EUROPE][LFR][RTRS][.PAFR]<FTE.PA><ORA.PA><CARR.PA><ACCP.PA><CGEP.PA></code>
<text>La Bourse de Paris en bref - Ouverture en repli de 0,16%</text>
</headline>
<session_id>fDpk9U-048Yng6*jaZCOEvkIqUBPF3JPmvM*D8n-KaN9p</session_id>
</headlines>
</dataset>
------XSL-----
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<html>
<link rel="stylesheet" type="text/css" href="/images/tiers.css"/>
<body>
<!-- <xsl:variable name="session" select="dataset/headlines/session_id"/> -->
<table>
<th>Revue de presse</th>
<th>Sport</th>
<th>Bourse de Paris</th>
<th>Business</th>
<th>Politique</th>
<th>Monde</th>
</table>
<xsl:for-each select="dataset/headlines">
<xsl:if test="@code='PRESS/FR'">
<a name="part1">
<h3>Revue de presse</h3>
<table>
<xsl:for-each select="headline">
<tr><td class="DATENEWS">
<xsl:value-of select="@date"/>
<xsl:value-of select="@time"/>
</td><td class="NEWS">
<a >
<xsl:attribute name="href"> http://ri2.rois.com/ppppp/CTIB/RI3APINEWS?TEXT=<xsl:value-of select="@storyid"/>
</xsl:attribute>
<xsl:value-of select="text"/>
</a></td></tr>
</xsl:for-each>
</table>
</a>
</xsl:if>
<xsl:if test="@code='SPO FR AND'">
<a name="part2">
<h3>Sport</h3>
<table>
<xsl:for-each select="headline">
<tr><td class="DATENEWS">
<xsl:value-of select="@date"/>
<xsl:value-of select="@time"/>
</td><td class="NEWS">
<a >
<xsl:attribute name="href"> http://ri2.rois.com/ppppp/CTIB/RI3APINEWS?TEXT=<xsl:value-of select="@storyid"/>
</xsl:attribute>
<xsl:value-of select="text"/>
</a>
</td></tr>
</xsl:for-each>
</table>
</a>
</xsl:if>
<xsl:if test="@code='.PAFR'">
<a name="part3">
<h3>Bourse de Paris</h3>
<table>
<xsl:for-each select="headline">
<tr><td class="DATENEWS">
<xsl:value-of select="@date"/>
<xsl:value-of select="@time"/>
</td><td class="NEWS">
<a >
<xsl:attribute name="href"> http://ri2.rois.com/ppppp/CTIB/RI3APINEWS?TEXT=<xsl:value-of select="@storyid"/>
</xsl:attribute>
<xsl:value-of select="text"/>
</a>
</td></tr>
</xsl:for-each>
</table>
</a>
</xsl:if>
<xsl:if test="@code='JOUR'">
<a name="part4">
<h3>Business</h3>
<table>
<xsl:for-each select="headline">
<tr><td class="DATENEWS">
<xsl:value-of select="@date"/>
<xsl:value-of select="@time"/>
</td><td class="NEWS">
<a >
<xsl:attribute name="href"> http://ri2.rois.com/ppppp/CTIB/RI3APINEWS?TEXT=<xsl:value-of select="@storyid"/>
</xsl:attribute>
<xsl:value-of select="text"/>
</a>
</td></tr>
</xsl:for-each>
</table>
</a>
</xsl:if>
<xsl:if test="@code='POL FR AND'">
<a name="part5">
<h3>Politique</h3>
<table>
<xsl:for-each select="headline">
<tr><td class="DATENEWS">
<xsl:value-of select="@date"/>
<xsl:value-of select="@time"/>
</td><td class="NEWS">
<a >
<xsl:attribute name="href"> http://ri2.rois.com/ppppp/CTIB/RI3APINEWS?TEXT=<xsl:value-of select="@storyid"/>
</xsl:attribute>
<xsl:value-of select="text"/>
</a>
</td></tr>
</xsl:for-each>
</table>
</a>
</xsl:if>
<xsl:if test="@code='G FR AND'">
<a name="part6">
<h3>Monde</h3>
<table>
<xsl:for-each select="headline">
<tr><td class="DATENEWS">
<xsl:value-of select="@date"/>
<xsl:value-of select="@time"/>
</td><td class="NEWS">
<a >
<xsl:attribute name="href"> http://ri2.rois.com/ppppp/CTIB/RI3APINEWS?TEXT=<xsl:value-of select="@storyid"/>
</xsl:attribute>
<xsl:value-of select="text"/>
</a>
</td></tr>
</xsl:for-each>
</table>
</a>
</xsl:if>
</xsl:for-each>
</body>
</html>
</xsl:template>
</xsl:stylesheet>
Please advise; if you want, I can send you the complete document by mail.
Best Regards

Haven't heard anything about the SMS sort order problem, but the problem with notifications not going away is common. You can try restarting the SMS application by pressing and holding the Home button while you're in the SMS application until the home screen appears.
You can also try restoring the phone and reinstalling the firmware. (If you have everything backed up, you won't lose any information.)

Similar Messages

  • Second battery does not report status correct - probably BIOS bug

    Hi,
    I have a T61 laptop and recently bought a second battery (for the CD drive slot).
    The problem is that the status of the second battery is not properly reported. Sometimes it doesn't show the status (charging, discharging etc.), sometimes not even the capacity. This is a problem, as it can cause my laptop to shut down when one battery runs out of power while the other is still full.
    This happens both in windows and linux. You can see some more details here:
    https://bugzilla.kernel.org/show_bug.cgi?id=17832
    I'm pretty sure that this is a bug in the BIOS. Is there any way I can contact the people who are working on the BIOS? I tried to use the web support form to get in contact, but the support told me to contact the nearest service center - not very helpful.

    On its home page, http://www.ngolde.de/yacpi.html, there is a mention that there may possibly be problems with battery status. You should probably try to contact the developer.

  • Collator and Turkish (probably a bug)

    Hi,
    Turkish has 2 unique letter pairs:
    '\u0130' & 'i' ('İ' & 'i'), which correspond to English 'I' & 'i';
    'I' & '\u0131' ('I' & 'ı'), which don't exist as letters in English and represent the back-vowel pair of English 'I' & 'i'.
    If you didn't get them above, you can check them out at:
    http://www.prustinteractive.com/toolbox/font/
    In other words, the Turkish counterparts of English I and i are both dotted, and the back-vowel versions of them are both dotless.
    My task at hand is:
    sort them alphabetically, ignoring case, but naturally capturing the dot difference.
    From the API it appears that either:
    langCollator.setStrength(Collator.SECONDARY|Collator.CANONICAL_DECOMPOSITION);
    or
    langCollator.setStrength(Collator.SECONDARY);
    should do the job.
    However, all combinations of containing PRIMARY & SECONDARY fail to distinguish between the dotfulls and the dotless. The only thing that gets both of them to compare != 0 is TERTIARY or (a logical | with) Collator.FULL_DECOMPOSITION. But the moment i do that i am no longer able to ignore case.
    I did a lot of testing, and am convinced that i'll need to submit a bug, but wanted to get any feedback first. If it is a bug, i'll have to figure out RuleBasedCollator, probably by listing the entire alphabet.
    P.S. In fact, letters '�' and 'o', for example, don't compare to 0 in Turkish version of Collator even with Collator.PRIMARY, apparently because they are a part of the alphabet, whereas '�' and 'a' always compare to 0 even with SECONDARY, which is again a bug even though '�' and 'a' are not part of the alphabet.
    I'm not really concerned whether PRIMARY or SECONDARY should compare the dotless and the dotfulls != 0, but right now none of them does, if there is a requirement of ignoring case.
    Thanks for any feedback,
    Reshat.
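    For reference, here is a minimal, self-contained sketch of the kind of test described above (the class name and the printed expectations are mine, not from the original post). Note that on java.text.Collator strength and decomposition are separate properties, set through setStrength() and setDecomposition() rather than OR'ed into a single call:

    import java.text.Collator;
    import java.util.Locale;

    public class TurkishCollatorTest {
        public static void main(String[] args) {
            // Collator for the Turkish locale.
            Collator c = Collator.getInstance(new Locale("tr", "TR"));
            c.setStrength(Collator.SECONDARY);
            c.setDecomposition(Collator.CANONICAL_DECOMPOSITION);

            // dotless i (\u0131) vs dotted i: the poster expects != 0 (distinct letters)
            System.out.println(c.compare("\u0131", "i"));
            // dotted capital I (\u0130) vs dotted i: the poster expects == 0 (case difference only)
            System.out.println(c.compare("\u0130", "i"));
        }
    }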

    Hi,
    a correction:
    in the message below, in P.S.:
    "whereas '�' and 'a' always compare to 0 even with SECONDARY, which is again a bug even though '�' and 'a' are not part of the alphabet."
    should have been written as:
    "but '�' and 'a' also != 0 even with PRIMARY, which is again a bug because '�' and 'a' are not part of the Turkish alphabet."
    Sorry for my mistake in previous message.

  • Problem with 2 View Objects based on One Entity -Probably a Bug in ADF BC

    Hi
    I am using JDeveloper 10.1.3(SU5) and adf faces and ADF BC and to explain my problem I use HR schema.
    First, I created 2 view objects based on countries table named as TestView1 and TestView2. I set TestView1 query where clause to region_id=1 and TestView2 query where clause to region_id!=1 in the view object editor and then I created 2 separated form on these 2 view objects by dragging and dropping from data control palette.
    Now, when I insert a record in the form based on TestView1 with region_id set to 1, commit the record, and go to the next form, I can see the record in the second form, which is completely wrong since it is against the where clause of the second view.
    I am really confused; the situation is very weird, and it seems to me like a bug in ADF BC. Am I right? Is there any workaround or solution for this problem?
    Any help would be highly appreciated.
    Best Regards,
    Navid

    Dear Frank,
    Thank you very much for your quick response.
    Reading your helpful comments now I have some questions:
    1- I have committed the record in the database, so shouldn't the view objects be re-queried?
    2- We tried to use clearVOCaches(entity_name, false) in afterCommit() of the base entity object, but unfortunately it does not work correctly. After that, we got the root application module, used the findViewObject() method to find all the views based on that entity (we found them by name, not automatically), and called executeQuery() on all of them (a rough sketch appears after this message). From my point of view this has 2 big disadvantages. First, suppose this is an important entity with 4 or 5 view objects based on it; then for inserting one record we have to re-execute 4 or 5 views, which I think creates performance issues. Besides, if during development a programmer adds a new view object based on this entity and doesn't add the executeQuery() for it in afterCommit(), we have the same problem again. Isn't there at least a way to automatically refresh all related view objects, even though the performance issue would still exist?
    3- You mentioned that this issue is handled in the developer guide. Could you kindly give me a reference to which developer guide you mean and which section I should read to overcome this problem? (I have the ADF Developer's Guide for Forms/4GL Developers; however, I searched for clearVOCaches and surprisingly nothing was found!)
    4- Could you please give me some hints on what, from your point of view, is the best method to solve this problem with minimal performance impact.
    Any comment would be of some great help.
    Thanks in advance,
    Navid
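    For illustration only, here is a rough sketch of the afterCommit() approach described in point 2 above, with hypothetical class and view object names. It is just the workaround under discussion (hard-coded view names and all), not a recommended pattern:

    // Hypothetical entity implementation class; names are illustrative only.
    public class CountriesImpl extends oracle.jbo.server.EntityImpl {
        public void afterCommit(oracle.jbo.server.TransactionEvent event) {
            super.afterCommit(event);
            oracle.jbo.ApplicationModule root =
                getDBTransaction().getRootApplicationModule();
            // Re-execute every view object known to be based on this entity
            // (hard-coded here, which is exactly the maintenance problem noted above).
            String[] voNames = { "TestView1", "TestView2" };
            for (String name : voNames) {
                oracle.jbo.ViewObject vo = root.findViewObject(name);
                if (vo != null) {
                    vo.executeQuery();
                }
            }
        }
    }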

  • Probably another bug in 2.1

    I have noticed (actually it is hard not to) a bug in the 2.1 software for iPod touch (1st gen). For some 3rd-party apps, if the app requires more processing (for games like Enigmo or Meta Squares), the display turns black and after some 10 seconds the iPod returns to the menu (the iPod menu, not the app menu). It does not happen when I start the app, but some 10 minutes afterwards. The same thing also happened with some other apps.
    I am pretty sure it is a bug because it happens with more than 5 apps. And ... it did not happen with the 2.0 software!
    Has anyone else noticed this kind of behavior, or am I the only one getting this?

    I just updated my Touch to 2.1. Five of my 3rd party apps will not open. Looks like something in the update is causing bugs in these apps. Previously when apps happened to be glitchy as you described, I would Resync the Touch and all would be OK.
    This time the apps will not open: I get a "cannot be opened" dialogue box.
    Anyone else have this problem?
    Perhaps I should email the apps developers to see if these apps need updates?
    Gretchen

  • Log file sync vs log file parallel write probably not bug 2669566

    This is a continuation of a previous thread about ‘log file sync’ and ‘log file parallel write’ events.
    Version : 9.2.0.8
    Platform : Solaris
    Application : Oracle Apps
    The number of commits per second ranges between 10 and 30.
    When querying statspack performance data the calculated average wait time on the event ‘log file sync’ is on average 10 times the wait time for the ‘log file parallel write’ event.
    Below are just 2 samples where the ratio is even about 20.
    snap_time              log file parallel write avg   log file sync avg   ratio
    11/05/2008 10:38:26    8,142                         156,343             19.20
    11/05/2008 10:08:23    8,434                         201,915             23.94
    So the wait time for a ‘log file sync’ is 10 times the wait time for a ‘log file parallel write’.
    First I thought that I was hitting bug 2669566.
    But then Jonathan Lewis's blog pointed me to Tanel Poder's snapper tool.
    And I think that it proves that I am NOT hitting this bug.
    Below is a sample of the output for the log writer.
    -- End of snap 3
    HEAD,SID, SNAPSHOT START ,SECONDS,TYPE,STATISTIC , DELTA, DELTA/SEC, HDELTA, HDELTA/SEC
    DATA, 4, 20081105 10:35:41, 30, STAT, messages sent , 1712, 57, 1.71k, 57.07
    DATA, 4, 20081105 10:35:41, 30, STAT, messages received , 866, 29, 866, 28.87
    DATA, 4, 20081105 10:35:41, 30, STAT, background timeouts , 10, 0, 10, .33
    DATA, 4, 20081105 10:35:41, 30, STAT, redo wastage , 212820, 7094, 212.82k, 7.09k
    DATA, 4, 20081105 10:35:41, 30, STAT, redo writer latching time , 2, 0, 2, .07
    DATA, 4, 20081105 10:35:41, 30, STAT, redo writes , 867, 29, 867, 28.9
    DATA, 4, 20081105 10:35:41, 30, STAT, redo blocks written , 33805, 1127, 33.81k, 1.13k
    DATA, 4, 20081105 10:35:41, 30, STAT, redo write time , 652, 22, 652, 21.73
    DATA, 4, 20081105 10:35:41, 30, WAIT, rdbms ipc message ,23431084, 781036, 23.43s, 781.04ms
    DATA, 4, 20081105 10:35:41, 30, WAIT, log file parallel write , 6312957, 210432, 6.31s, 210.43ms
    DATA, 4, 20081105 10:35:41, 30, WAIT, LGWR wait for redo copy , 18749, 625, 18.75ms, 624.97us
    When adding the DELTA/SEC (which is in micro seconds) for the wait events it always roughly adds up to a million micro seconds.
    In the example above 781036 + 210432 = 991468 micro seconds.
    This is the case for all the snaps taken by snapper.
    So I think that the wait time for the ‘log file parallel write time’ must be more or less correct.
    So I still have the question “Why is the ‘log file sync’ about 10 times the time of the ‘log file parallel write’?”
    Any clues?

    Yes that is true!
    But that is the way I calculate the average wait time = total wait time / total waits
    So the average wait time for the 'log file sync' event per wait should be near the wait time for the 'log file parallel write' event.
    I use the query below:
    select snap_id
    , snap_time
    , event
    , time_waited_micro
    , (time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24) corrected_wait_time_h
    , total_waits
    , (total_waits - p_total_waits)/((snap_time - p_snap_time) * 24) corrected_waits_h
    , trunc(((time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24))/((total_waits - p_total_waits)/((snap_time - p_snap_time) * 24))) average
    from (
    select sn.snap_id, sn.snap_time, se.event, se.time_waited_micro, se.total_waits,
    lag(sn.snap_id) over (partition by se.event order by sn.snap_id) p_snap_id,
    lag(sn.snap_time) over (partition by se.event order by sn.snap_time) p_snap_time,
    lag(se.time_waited_micro) over (partition by se.event order by sn.snap_id) p_time_waited_micro,
    lag(se.total_waits) over (partition by se.event order by sn.snap_id) p_total_waits,
    row_number() over (partition by event order by sn.snap_id) r
    from perfstat.stats$system_event se, perfstat.stats$snapshot sn
    where se.SNAP_ID = sn.SNAP_ID
    and se.EVENT = 'log file sync'
    order by snap_id, event
    )
    where time_waited_micro - p_time_waited_micro > 0
    order by snap_id desc;

  • Unable to Execute QUery ORA -00911 Probably Oracle Bug Please confirm

    HI Gurus,
    Unable to run the query:
    I was able to run the select query fine, and was able to run the insert query fine... but when combining them and running, it throws the following error.
    Is this a bug in Oracle?
    Error starting at line 1 in command:
    INSERT INTO MigrationCorrespData1(did) SELECT did from revisions where (dInDate >={ts '2011-09-01 00:00:01'} and dInDate <={ts '2012-01-01 23:59:59'})
    Error at Command Line:1 Column:80
    Error report:
    SQL Error: ORA-00911: invalid character

    It's not a bug, your syntax just isn't Oracle syntax.
    I googled a bit (which you could have done yourself as easily) and it appears that your syntax is JDBC syntax. The Oracle equivalent is timestamp:
    select timestamp'2009-12-08 00:00:00.000' my_ts from dual;
    MY_TS                         
    08-DEC-09 12.00.00.000000000 AM
    Edited by: InoL on Jun 6, 2012 8:57 AM
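    For completeness, here is the original INSERT rewritten with Oracle timestamp literals in place of the JDBC {ts ...} escapes (a sketch using the table and column names from the question, not tested against the poster's schema):
    INSERT INTO MigrationCorrespData1 (did)
    SELECT did
      FROM revisions
     WHERE dInDate >= TIMESTAMP '2011-09-01 00:00:01'
       AND dInDate <= TIMESTAMP '2012-01-01 23:59:59';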

  • The mystery of "ghost files" - (probably a bug?)

    Hey there,
    I recently noticed some very strange behavior regarding folder sizes. From time to time I have to handle huge numbers of files, which are stored in directories in my home directory, and then I delete a large portion of those files. But after deleting, the folder size stays the same, even if there are no files left in that folder.
    So i decided to set up a little experiment.
    $ mkdir test{1,2}
    $ du -hs test{1,2}
    4,0K test1
    4,0K test2
    $ for i in {1..5000}; do touch test1/$i; done
    $ du -hs test{1,2}
    76K test1
    4,0K test2
    $ rm test1/*
    $ ls -a test{1,2}
    test1:
    test2:
    $ du -hs test{1,2}
    76K test1
    4,0K test2
    dirk ~ $
    First I created two empty directories (proven by using du), then I started a loop with 5000 iterations to create 5000 empty files in the first directory. Now the directory contains information with an overall size of 76K, which is correct, because the directory contains 5000 files. Then I cleared the whole directory using rm; now the directory is empty (proven by ls, which only shows . and .. in both directories). But using du again shows that the directory test1 (which contained the 5000 files before deleting them) still has a size of 76K, despite being as empty as test2.
    Both of the directories (test1 and test2) contain the same amount of files (none except . and ..), but why is the size of test1 76K and the size of test2 only 4K?
    I know why test1 was 76K while it contained the 5000 files. But why is it still 76K after deleting the files, and why does the size not get adjusted after deleting them?
    I’m looking forward to your explainations.
    Thanks in advance!
    Kind regards,
    Dirk

    So i waste my disk space (5000 was only for testing purposes) because of thousands and thousands of useless inode pointers in a directory? Is there a way to change this behavior?
    Edit: Well, maybe there is, but I don't know. In the meantime I created a little Python script to get rid of those useless inode pointers:
    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    # vim: ts=4:sw=4
    # CC-by-sa, Dirk Sohler, [email protected]
    """It seems like Linux keeps inode pointers for already deleted files in
    directory definitions. By using this script the problem still is there, but
    the results are fixed by moving all files into a new directory (keeping
    file metadata)."""
    import os
    from optparse import OptionParser
    import tempfile
    import shutil

    def opts():
        """Parses the options given by the user"""
        parser = OptionParser(
            usage='%prog [options] {directory name(s)}',
            version='%prog 0.1')
        return parser.parse_args()

    def getsize(dirs):
        # Sum the on-disk size of the directory files themselves.
        size = 0
        for d in dirs:
            size += os.stat(d).st_size
        return size

    def mvdirs(dirs):
        # Copy each directory to a temporary location and back, which rebuilds
        # the directory file and drops the stale entries.
        count = 0
        for d in dirs:
            dirname = d
            tmpdir = tempfile.mkdtemp()
            shutil.copytree(d, tmpdir + '/d')
            shutil.rmtree(d)
            shutil.move(tmpdir + '/d', dirname)
            shutil.rmtree(tmpdir)
            count += 1
        return count

    def main():
        o, dirs = opts()
        oldsize = getsize(dirs)  # renamed from 'os' to avoid shadowing the os module
        ct = mvdirs(dirs)
        newsize = getsize(dirs)
        sd = oldsize - newsize
        mb = round(sd / 1024.0 / 1024, 2)
        print('You just saved %s bytes (%s MB) by processing %s directories'
              % (sd, mb, ct))

    if __name__ == '__main__':
        main()
    Call with directory name(s):
    $ cd "my huge directories"
    $ antiwaste.py *
    You just saved 6434816 bytes (6.14 MB) by processing 31 directories
    $
    This script was done quick’n’dirty. Use at your own risk
    Last edited by Dirk Sohler (2011-01-31 04:17:58)

  • PDF books added in iTunes 11.1.3 shows up in Music Albums tab [Probably a bug]

    iTunes 11.1.3 still allows you to add PDF books to the library, and the imported books show up in the Albums view under the Music section. This seems weird, as the Books tab was removed from iTunes. Importing 10 books results in the creation of an album entry which is populated with the books added; it marks the album as unknown artist and unknown album.

    Hi there jack4120,
    You may find the troubleshooting steps in the article below helpful.
    iTunes for Windows Vista, Windows 7, or Windows 8: Fix unexpected quits or launch issues - http://support.apple.com/kb/ts1717
    -Griff W. 

  • The Bug about 'DB_SECONDARY_BAD' still exists in BerkeleyDB4.8!

    The Bug about 'DB_SECONDARY_BAD' still exists in BerkeleyDB4.8?
    I'm sorry for my poor English, but I just cannot find anywhere else to ask for help.
    Thanks for your patience first!
    I'm using BDB4.8 C++ API on Ubuntu 10.04, Linux Kernel 2.6.32-24-generic
    $uname -a
    $Linux wonpc 2.6.32-24-generic #43-Ubuntu SMP Thu Sep 16 14:17:33 UTC 2010 i686 GNU/Linux
    When I update (overwrite) a record in the database, I may get a DB_SECONDARY_BAD exception.
    What's worse, this does not always occur; it's random. So I think it is probably a bug
    in BDB. I have seen many issues about DB_SECONDARY_BAD with BDB 4.5, 4.6, ...
    To reproduce the issue, I made a simplified test program from my real program.
    The data to be stored in the database is a class called 'EntryData'. It is defined in db_access.h,
    which also declares some high-level API functions that hide the BDB calls, such as
    store_entry_data(), which takes EntryData as its argument. EntryData has a string
    member 'name' and a vector<string> member 'labels', so store_entry_data() packs
    the real data of an EntryData into a contiguous memory block. get_entry_data() returns
    an EntryData built up from the contiguous memory block fetched from the database.
    The complete test program follows this line:
    /////////db_access.h////////////
    #ifndef __DB_ACCESS_H__
    #define __DB_ACCESS_H__
    #include <string>
    #include <vector>
    #include <db_cxx.h>
    class EntryData;
    //extern Path DataDir; // default value, can be changed
    extern int database_setup();
    extern int database_close();
    extern int store_entry_data(const EntryData&, u_int32_t = DB_NOOVERWRITE);
    extern int get_entry_data(const std::string&, EntryData*, u_int32_t = 0);
    extern int rm_entry_data(const std::string&);
    class DBSetup
    // Calls database_setup() on construction and database_close() automatically when going out of scope.
    // This class has no data members.
    public:
    DBSetup() {
    database_setup();
    ~DBSetup() {
    database_close();
    class EntryData
    public:
    typedef std::vector<std::string> LabelContainerType;
    EntryData() {}
    EntryData(const std::string& s) : name(s) {}
    EntryData(const std::string& s, LabelContainerType& v)
    : name(s), labels(v) {}
    EntryData(const std::string&, const char*[]);
    class DataBlock;
    // Construct directly from a memory block; the mem pointer will be obtained from the database.
    // It is the contents of buf_ptr->buf in the DataBlock that an EntryData was converted to.
    EntryData(const void* mem_blk, const int len);
    ~EntryData() {};
    const std::string& get_name () const { return name; }
    const LabelContainerType& get_labels() const { return labels; }
    void set_name (const std::string& s) { name = s; }
    void add_label(const std::string&);
    void rem_label(const std::string&);
    void show() const;
    // get contiguous memory for all:
    DataBlock get_block() const { return DataBlock(*this); }
    class DataBlock
    // contiguous memory for all.
    public:
    DataBlock(const EntryData& data);
    // Adopt a block of memory as the contents of buf_ptr->buf,
    // e.g. a result fetched from the database.
    DataBlock(void* mem, int len);
    // Copy constructor:
    DataBlock(const DataBlock& orig) :
    data_size(orig.data_size),
    capacity(orig.capacity),
    buf_ptr(orig.buf_ptr) { ++buf_ptr->use; }
    // Assignment operator:
    DataBlock& operator=(const DataBlock& oth)
    data_size = oth.data_size;
    capacity = oth.capacity;
    if(--buf_ptr->use == 0)
    delete buf_ptr;
    buf_ptr = oth.buf_ptr;
    return *this;
    ~DataBlock() {
    if(--buf_ptr->use == 0) { delete buf_ptr; }
    // data() is forced to return char* because the Dbt constructor does not accept const char*.
    // The pointer returned by data() should not be modified.
    const char* data() const { return buf_ptr->buf; }
    int size() const { return data_size; }
    private:
    void pack_str(const std::string& s);
    static const int init_capacity = 100;
    int data_size; // length of the data block.
    int capacity; // amount of memory already allocated for buf.
    class SmartPtr; // forward declaration.
    SmartPtr* buf_ptr;
    class SmartPtr
    friend class DataBlock;
    char* buf;
    int use;
    SmartPtr(char* p) : buf(p), use(1) {}
    ~SmartPtr() { delete [] buf; }
    private:
    std::string name; // entry name
    LabelContainerType labels; // entry labels list
    }; // class EntryData
    #endif
    //////db_access.cc/////////////
    #include <iostream>
    #include <cstring>
    #include <cstdlib>
    #include <vector>
    #include <algorithm>
    #include "directory.h"
    #include "db_access.h"
    using namespace std;
    static Path DataDir("~/mydict_data"); // default value, can be changed
    const Path& get_datadir() { return DataDir; }
    static DbEnv myEnv(0);
    static Db db_bynam(&myEnv, 0); // using name as key
    static Db db_bylab(&myEnv, 0); // using label as key
    static int generate_keys_for_db_bylab
    (Db* sdbp, const Dbt* pkey, const Dbt* pdata, Dbt* skey)
    EntryData entry_data(pdata->get_data(), pdata->get_size());
    int lab_num = entry_data.get_labels().size();
    Dbt* tmpdbt = (Dbt*) malloc( sizeof(Dbt) * lab_num );
    memset(tmpdbt, 0, sizeof(Dbt) * lab_num);
    EntryData::LabelContainerType::const_iterator
    lab_it = entry_data.get_labels().begin(), lab_end = entry_data.get_labels().end();
    for(int i = 0; lab_it != lab_end; ++lab_it, ++i) {
    tmpdbt[ i ].set_data( (void*)lab_it->c_str() );
    tmpdbt[ i ].set_size( lab_it->size() );
    skey->set_flags(DB_DBT_MULTIPLE | DB_DBT_APPMALLOC);
    skey->set_data(tmpdbt);
    skey->set_size(lab_num);
    return 0;
    //@Return Value: return non-zero at error
    extern int database_setup()
    const string DBEnvHome (DataDir + "DBEnv");
    const string dbfile_bynam("dbfile_bynam");
    const string dbfile_bylab("dbfile_bylab");
    db_bylab.set_flags(DB_DUPSORT);
    const u_int32_t env_flags = DB_CREATE | DB_INIT_MPOOL;
    const u_int32_t db_flags = DB_CREATE;
    rmkdir(DBEnvHome);
    try
    myEnv.open(DBEnvHome.c_str(), env_flags, 0);
    db_bynam.open(NULL, dbfile_bynam.c_str(), NULL, DB_BTREE, db_flags, 0);
    db_bylab.open(NULL, dbfile_bylab.c_str(), NULL, DB_BTREE, db_flags, 0);
    db_bynam.associate(NULL, &db_bylab, generate_keys_for_db_bylab, 0);
    } catch(DbException &e) {
    cerr << "Err when open DBEnv or Db: " << e.what() << endl;
    return -1;
    } catch(std::exception& e) {
    cerr << "Err when open DBEnv or Db: " << e.what() << endl;
    return -1;
    return 0;
    int database_close()
    try {
    db_bylab.close(0);
    db_bynam.close(0);
    myEnv.close(0);
    } catch(DbException &e) {
    cerr << e.what();
    return -1;
    } catch(std::exception &e) {
    cerr << e.what();
    return -1;
    return 0;
    // Returns the return value of Db::put().
    int store_entry_data(const EntryData& e, u_int32_t flags)
    int res = 0;
    try {
    EntryData::DataBlock blk(e);
    // The first string stored in the buffer returned by data() is e.get_name().
    Dbt key ( (void*)blk.data(), strlen(blk.data()) + 1 );
    Dbt data( (void*)blk.data(), blk.size() );
    res = db_bynam.put(NULL, &key, &data, flags);
    } catch (DbException& e) {
    cerr << e.what() << endl;
    throw; // rethrow.
    return res;
    // Returns the return value of Db::get(); EntryData* e is only meaningful when the call succeeds.
    int get_entry_data
    (const std::string& entry_name, EntryData* e, u_int32_t flags)
    Dbt key( (void*)entry_name.c_str(), entry_name.size() + 1 );
    Dbt data;
    data.set_flags(DB_DBT_MALLOC);
    int res = db_bynam.get(NULL, &key, &data, flags);
    if(res == 0)
    new (e) EntryData( data.get_data(), data.get_size() );
    free( data.get_data() );
    return res;
    int rm_entry_data(const std::string& name)
    Dbt key( (void*)name.c_str(), name.size() + 1 );
    cout << "to remove: \'" << name << "\'" << endl;
    int res = db_bynam.del(NULL, &key, 0);
    return res;
    EntryData::EntryData(const std::string& s, const char* labels_arr[]) : name(s)
    {   // labels_arr must be terminated with NULL.
    for(const char** i = labels_arr; *i != NULL; i++)
    labels.push_back(*i);
    EntryData::EntryData(const void* mem_blk, const int len)
    const char* buf = (const char*)mem_blk;
    int consumed = 0; // number of bytes of mem_blk consumed so far.
    name = buf; // the first string is the name.
    consumed += name.size() + 1;
    for (string label = buf + consumed;
    consumed < len;
    consumed += label.size() + 1)
    label = buf + consumed;
    labels.push_back(label);
    void EntryData::add_label(const string& new_label)
    if(find(labels.begin(), labels.end(), new_label)
    == labels.end())
    labels.push_back(new_label);
    void EntryData::rem_label(const string& to_rem)
    LabelContainerType::iterator iter = find(labels.begin(), labels.end(), to_rem);
    if(iter != labels.end())
    labels.erase(iter);
    void EntryData::show() const {
    cout << "name: " << name << "; labels: ";
    LabelContainerType::const_iterator it, end = labels.end();
    for(it = labels.begin(); it != end; ++it)
    cout << *it << " ";
    cout << endl;
    EntryData::DataBlock::DataBlock(const EntryData& data) :
    data_size(0),
    capacity(init_capacity),
    buf_ptr(new SmartPtr(new char[init_capacity]))
    pack_str(data.name);
    for(EntryData::LabelContainerType::const_iterator \
    i = data.labels.begin();
    i != data.labels.end();
    ++i) { pack_str(*i); }
    void EntryData::DataBlock::pack_str(const std::string& s)
    int string_size = s.size() + 1; // to put sting in buf separately.
    if(capacity >= data_size + string_size) {
    memcpy(buf_ptr->buf + data_size, s.c_str(), string_size);
    else {
    capacity = (data_size + string_size)*2; // allocate a generously large space.
    buf_ptr->buf = (char*)realloc(buf_ptr->buf, capacity);
    memcpy(buf_ptr->buf + data_size, s.c_str(), string_size);
    data_size += string_size;
    //////////// test_put.cc ///////////
    #include <iostream>
    #include <string>
    #include <cstdlib>
    #include "db_access.h"
    using namespace std;
    int main(int argc, char** argv)
    if(argc < 2) { exit(EXIT_FAILURE); }
    DBSetup setupup_mydb;
    int res = 0;
    EntryData ed(argv[1], (const char**)argv + 2);
    res = store_entry_data(ed);
    if(res != 0) {
         cerr << db_strerror(res) << endl;
         return res;
    return 0;
    // To Compile:
    // $ g++ -ldb_cxx -lboost_regex -o test_put test_put.cc db_access.cc directory.cc
    //////////// test_update.cc ///////////
    #include <iostream>
    #include <cstdlib>
    #include <string>
    #include <boost/program_options.hpp>
    #include "db_access.h"
    using namespace std;
    namespace po = boost::program_options;
    int main(int argc, char** argv)
    if(argc < 2) { exit(EXIT_SUCCESS); }
    DBSetup setupup_mydb;
    int res = 0;
    po::options_description cmdopts("Allowed options");
    po::positional_options_description pos_opts;
    cmdopts.add_options()
    ("entry", "Specify the entry that will be edited")
    ("addlabel,a", po::value< vector<string> >(),
    "add a label for specified entry")
    ("removelabel,r", po::value< vector<string> >(),
    "remove the label of specified entry")
    pos_opts.add("entry", 1);
    po::variables_map vm;
    store( po::command_line_parser(argc, argv).
    options(cmdopts).positional(pos_opts).run(), vm );
    notify(vm);
    EntryData entry_data;
    if(vm.count("entry")) {
    const string& entry_to_edit = vm["entry"].as<string>();
    res = get_entry_data( entry_to_edit, &entry_data );
    switch (res)
    case 0:
    break;
    case DB_NOTFOUND:
    cerr << "No entry named: \'"
    << entry_to_edit << "\'\n";
    return res;
    break;
    default:
    cerr << db_strerror(res) << endl;
    return res;
    } else {
    cerr << "No entry specified\n";
    exit(EXIT_FAILURE);
    EntryData new_entry_data(entry_data);
    typedef vector<string>::const_iterator VS_CI;
    if(vm.count("addlabel")) {
    const vector<string>& to_adds = vm["addlabel"].as< vector<string> >();
    VS_CI end = to_adds.end();
    for(VS_CI i = to_adds.begin(); i != end; ++i) {
    new_entry_data.add_label(*i);
    if(vm.count("removelabel")) {
    const vector<string>& to_rems = vm["removelabel"].as< vector<string> >();
    VS_CI end = to_rems.end();
    for(VS_CI i = to_rems.begin(); i != end; ++i) {
    new_entry_data.rem_label(*i);
    cout << "Old data| ";
    entry_data.show();
    cout << "New data| ";
    new_entry_data.show();
    res = store_entry_data(new_entry_data, 0); // set flags to zero permitting Over Write
    if(res != 0) {
    cerr << db_strerror(res) << endl;
    return res;
    return 0;
    // To Compile:
    // $ g++ -ldb_cxx -lboost_regex -lboost_program_options -o test_update test_update.cc db_access.cc directory.cc

    ////////directory.h//////
    #ifndef __DIRECTORY_H__
    #define __DIRECTORY_H__
    #include <string>
    #include <string>
    #include <sys/types.h>
    using std::string;
    class Path
    public:
    Path() {}
    Path(const std::string&);
    Path(const char* raw) { new (this) Path(string(raw)); }
    Path upper() const;
    void operator+= (const std::string&);
    // convert to string (char*):
    //operator std::string() const {return spath;}
    operator const char*() const {return spath.c_str();}
    const std::string& str() const {return spath;}
    private:
    std::string spath; // the real path
    inline Path operator+(const Path& L, const string& R)
    Path p(L);
    p += R;
    return p;
    int rmkdir(const string& path, const mode_t mode = 0744, const int depth = -1);
    #endif
    ///////directory.cc///////
    #ifndef __DIRECTORY_H__
    #define __DIRECTORY_H__
    #include <string>
    #include <string>
    #include <sys/types.h>
    using std::string;
    class Path
    public:
    Path() {}
    Path(const std::string&);
    Path(const char* raw) { new (this) Path(string(raw)); }
    Path upper() const;
    void operator+= (const std::string&);
    // convert to string (char*):
    //operator std::string() const {return spath;}
    operator const char*() const {return spath.c_str();}
    const std::string& str() const {return spath;}
    private:
    std::string spath; // the real path
    inline Path operator+(const Path& L, const string& R)
    Path p(L);
    p += R;
    return p;
    int rmkdir(const string& path, const mode_t mode = 0744, const int depth = -1);
    #endif
    //////////////////// All the code is above ////////////////////////////////
    Use the command below
    $ g++ -ldb_cxx -lboost_regex -o test_put test_put.cc db_access.cc directory.cc
    to get a test program that can insert a record to database.
    To insert a record, use the command below:
    $ ./test_put ubuntu linux os
    It will store an EntryData named 'ubuntu' and two labels('linux', 'os') to database.
    Use the command below
    $ g++ -ldb_cxx -lboost_regex -lboost_program_options -o test_update test_update.cc db_access.cc directory.cc
    to get a test program that can update the existing records.
    To update a record, use the command below:
    $ ./test_update ubuntu -r linux -a canonical
    It will update the record with the key 'ubuntu', removing the label 'linux' and adding a new
    label 'canonical'.
    Great thanks to you if you've read and understood my code!
    I've said that the DB_SECONDARY_BAD exception is random. The same operation may cause the
    exception one time and may go fine another time.
    As I tested below:
    ## Lines not starting with '$' are stdout or stderr.
    $ ./test_put linux os linus
    $ ./test_update linux -r os
    Old data| name: linux; labels: os linus
    New data| name: linux; labels: linus
    $ ./test_update linux -r linus
    Old data| name: linux; labels: linus
    New data| name: linux; labels:
    dbfile_bynam: DB_SECONDARY_BAD: Secondary index inconsistent with primary
    Db::put: DB_SECONDARY_BAD: Secondary index inconsistent with primary
    terminate called after throwing an instance of 'DbException'
    what(): Db::put: DB_SECONDARY_BAD: Secondary index inconsistent with primary
    Aborted
    Look! I've received a DB_SECONDARY_BAD exception. But this exception does not always
    happen, even under the same operation.
    Since the exception is random, you may not have the "luck" to get it during your test.
    So let's insert a record by:
    $ ./test_put t
    and then give it a great number of labels:
    $ for((i = 0; i != 100; ++i)); do ./test_update t -a "label_$i"; done
    and then:
    $ for((i = 0; i != 100; ++i)); do ./test_update t -r "label_$i"; done
    Thus, the DB_SECONDARY_BAD exception is almost certain to happen.
    I've been confused by this problem for some time. I would appreciate it if someone could solve
    my problem.
    Many thanks!
    Wonder

  • Re: BUG? APEX 4.0: ORA-20503 error editing report with 400+ columns

    Hello Everyone.
    I've run into something quite strange and am hoping you can help me.
    I am using Apex 4.0.1 and Oracle version 10.2.0.5. I've created a "classical" report in which the underlying SQL is a very simple:
    select * from pvtab
    The Oracle table pvtab consists of 419 columns, all of which are of varchar2(88) and number type. That's it.
    When I run the report, all of the columns show up as expected.
    However, when I go into the "Report Attributes" tab and click on one of the fields (any of them, it doesn't matter which one), I immediately get the following error:
    ORA-20503: Current version of data in database has changed since user initiated update process. current checksum = "598CAA7B68746A66F4B99E1512C36DED" application checksum = "0"
    If I replace the "*" with a few actual column names, then I am able to access any of these columns without problem.
    If I put back the "*", I then encounter this error again.
    I have never seen this error with other SQL SELECT statements in which I use the "*" qualifier to retrieve all columns from the table.
    And so, I am wondering if the error is caused because of the large number of columns (419) in my table.
    I've seen this same error mentioned in connection with forms but never with a report.
    So, is there some limit to the number of columns one can have in a "classic" or interactive report?
    Any idea why I would be getting this error?
    Here is the DDL for my table pvtab:
    CREATE TABLE  "PVTAB"
       (     "MICRO" VARCHAR2(4),
         "PRIM" VARCHAR2(4),
         "UNIT" NUMBER,
         "SEC_REF_1" NUMBER,
         "SECN_1" VARCHAR2(88),
         "SEC_REF_2" NUMBER,
         "SECN_2" VARCHAR2(88),
         "SEC_REF_3" NUMBER,
         "SECN_3" VARCHAR2(88),
         "SEC_REF_4" NUMBER,
         "SECN_4" VARCHAR2(88),
         "SEC_REF_5" NUMBER,
         "SECN_5" VARCHAR2(88),
         "SEC_REF_6" NUMBER,
         "SECN_6" VARCHAR2(88),
         "SEC_REF_7" NUMBER,
         "SECN_7" VARCHAR2(88),
         "SEC_REF_8" NUMBER,
         "SECN_8" VARCHAR2(88),
         "SEC_REF_9" NUMBER,
         "SECN_9" VARCHAR2(88),
         "SEC_REF_10" NUMBER,
         "SECN_10" VARCHAR2(88),
         "SEC_REF_11" NUMBER,
         "SECN_11" VARCHAR2(88),
         "SEC_REF_12" NUMBER,
         "SECN_12" VARCHAR2(88),
         "SEC_REF_13" NUMBER,
         "SECN_13" VARCHAR2(88),
         "SEC_REF_14" NUMBER,
         "SECN_14" VARCHAR2(88),
         "SEC_REF_15" NUMBER,
         "SECN_15" VARCHAR2(88),
         "SEC_REF_16" NUMBER,
         "SECN_16" VARCHAR2(88),
         "SEC_REF_17" NUMBER,
         "SECN_17" VARCHAR2(88),
         "SEC_REF_18" NUMBER,
         "SECN_18" VARCHAR2(88),
         "SEC_REF_19" NUMBER,
         "SECN_19" VARCHAR2(88),
         "SEC_REF_20" NUMBER,
         "SECN_20" VARCHAR2(88),
         "SEC_REF_21" NUMBER,
         "SECN_21" VARCHAR2(88),
         "SEC_REF_22" NUMBER,
         "SECN_22" VARCHAR2(88),
         "SEC_REF_23" NUMBER,
         "SECN_23" VARCHAR2(88),
         "SEC_REF_24" NUMBER,
         "SECN_24" VARCHAR2(88),
         "SEC_REF_25" NUMBER,
         "SECN_25" VARCHAR2(88),
         "SEC_REF_26" NUMBER,
         "SECN_26" VARCHAR2(88),
         "SEC_REF_27" NUMBER,
         "SECN_27" VARCHAR2(88),
         "SEC_REF_28" NUMBER,
         "SECN_28" VARCHAR2(88),
         "SEC_REF_29" NUMBER,
         "SECN_29" VARCHAR2(88),
         "SEC_REF_30" NUMBER,
         "SECN_30" VARCHAR2(88),
         "SEC_REF_31" NUMBER,
         "SECN_31" VARCHAR2(88),
         "SEC_REF_32" NUMBER,
         "SECN_32" VARCHAR2(88),
         "SEC_REF_33" NUMBER,
         "SECN_33" VARCHAR2(88),
         "SEC_REF_34" NUMBER,
         "SECN_34" VARCHAR2(88),
         "SEC_REF_35" NUMBER,
         "SECN_35" VARCHAR2(88),
         "SEC_REF_36" NUMBER,
         "SECN_36" VARCHAR2(88),
         "SEC_REF_37" NUMBER,
         "SECN_37" VARCHAR2(88),
         "SEC_REF_38" NUMBER,
         "SECN_38" VARCHAR2(88),
         "SEC_REF_39" NUMBER,
         "SECN_39" VARCHAR2(88),
         "SEC_REF_40" NUMBER,
         "SECN_40" VARCHAR2(88),
         "SEC_REF_41" NUMBER,
         "SECN_41" VARCHAR2(88),
         "SEC_REF_42" NUMBER,
         "SECN_42" VARCHAR2(88),
         "SEC_REF_43" NUMBER,
         "SECN_43" VARCHAR2(88),
         "SEC_REF_44" NUMBER,
         "SECN_44" VARCHAR2(88),
         "SEC_REF_45" NUMBER,
         "SECN_45" VARCHAR2(88),
         "SEC_REF_46" NUMBER,
         "SECN_46" VARCHAR2(88),
         "SEC_REF_47" NUMBER,
         "SECN_47" VARCHAR2(88),
         "SEC_REF_48" NUMBER,
         "SECN_48" VARCHAR2(88),
         "SEC_REF_49" NUMBER,
         "SECN_49" VARCHAR2(88),
         "SEC_REF_50" NUMBER,
         "SECN_50" VARCHAR2(88),
         "SEC_REF_51" NUMBER,
         "SECN_51" VARCHAR2(88),
         "SEC_REF_52" NUMBER,
         "SECN_52" VARCHAR2(88),
         "SEC_REF_53" NUMBER,
         "SECN_53" VARCHAR2(88),
         "SEC_REF_54" NUMBER,
         "SECN_54" VARCHAR2(88),
         "SEC_REF_55" NUMBER,
         "SECN_55" VARCHAR2(88),
         "SEC_REF_56" NUMBER,
         "SECN_56" VARCHAR2(88),
         "SEC_REF_57" NUMBER,
         "SECN_57" VARCHAR2(88),
         "SEC_REF_58" NUMBER,
         "SECN_58" VARCHAR2(88),
         "SEC_REF_59" NUMBER,
         "SECN_59" VARCHAR2(88),
         "SEC_REF_60" NUMBER,
         "SECN_60" VARCHAR2(88),
         "SEC_REF_61" NUMBER,
         "SECN_61" VARCHAR2(88),
         "SEC_REF_62" NUMBER,
         "SECN_62" VARCHAR2(88),
         "SEC_REF_63" NUMBER,
         "SECN_63" VARCHAR2(88),
         "SEC_REF_64" NUMBER,
         "SECN_64" VARCHAR2(88),
         "SEC_REF_65" NUMBER,
         "SECN_65" VARCHAR2(88),
         "SEC_REF_66" NUMBER,
         "SECN_66" VARCHAR2(88),
         "SEC_REF_67" NUMBER,
         "SECN_67" VARCHAR2(88),
         "SEC_REF_68" NUMBER,
         "SECN_68" VARCHAR2(88),
         "SEC_REF_69" NUMBER,
         "SECN_69" VARCHAR2(88),
         "SEC_REF_70" NUMBER,
         "SECN_70" VARCHAR2(88),
         "SEC_REF_71" NUMBER,
         "SECN_71" VARCHAR2(88),
         "SEC_REF_72" NUMBER,
         "SECN_72" VARCHAR2(88),
         "SEC_REF_73" NUMBER,
         "SECN_73" VARCHAR2(88),
         "SEC_REF_74" NUMBER,
         "SECN_74" VARCHAR2(88),
         "SEC_REF_75" NUMBER,
         "SECN_75" VARCHAR2(88),
         "SEC_REF_76" NUMBER,
         "SECN_76" VARCHAR2(88),
         "SEC_REF_77" NUMBER,
         "SECN_77" VARCHAR2(88),
         "SEC_REF_78" NUMBER,
         "SECN_78" VARCHAR2(88),
         "SEC_REF_79" NUMBER,
         "SECN_79" VARCHAR2(88),
         "SEC_REF_80" NUMBER,
         "SECN_80" VARCHAR2(88),
         "SEC_REF_81" NUMBER,
         "SECN_81" VARCHAR2(88),
         "SEC_REF_82" NUMBER,
         "SECN_82" VARCHAR2(88),
         "SEC_REF_83" NUMBER,
         "SECN_83" VARCHAR2(88),
         "SEC_REF_84" NUMBER,
         "SECN_84" VARCHAR2(88),
         "SEC_REF_85" NUMBER,
         "SECN_85" VARCHAR2(88),
         "SEC_REF_86" NUMBER,
         "SECN_86" VARCHAR2(88),
         "SEC_REF_87" NUMBER,
         "SECN_87" VARCHAR2(88),
         "SEC_REF_88" NUMBER,
         "SECN_88" VARCHAR2(88),
         "SEC_REF_89" NUMBER,
         "SECN_89" VARCHAR2(88),
         "SEC_REF_90" NUMBER,
         "SECN_90" VARCHAR2(88),
         "SEC_REF_91" NUMBER,
         "SECN_91" VARCHAR2(88),
         "SEC_REF_92" NUMBER,
         "SECN_92" VARCHAR2(88),
         "SEC_REF_93" NUMBER,
         "SECN_93" VARCHAR2(88),
         "SEC_REF_94" NUMBER,
         "SECN_94" VARCHAR2(88),
         "SEC_REF_95" NUMBER,
         "SECN_95" VARCHAR2(88),
         "SEC_REF_96" NUMBER,
         "SECN_96" VARCHAR2(88),
         "SEC_REF_97" NUMBER,
         "SECN_97" VARCHAR2(88),
         "SEC_REF_98" NUMBER,
         "SECN_98" VARCHAR2(88),
         "SEC_REF_99" NUMBER,
         "SECN_99" VARCHAR2(88),
         "SEC_REF_100" NUMBER,
         "SECN_100" VARCHAR2(88),
         "SEC_REF_101" NUMBER,
         "SECN_101" VARCHAR2(88),
         "SEC_REF_102" NUMBER,
         "SECN_102" VARCHAR2(88),
         "SEC_REF_103" NUMBER,
         "SECN_103" VARCHAR2(88),
         "SEC_REF_104" NUMBER,
         "SECN_104" VARCHAR2(88),
         "SEC_REF_105" NUMBER,
         "SECN_105" VARCHAR2(88),
         "SEC_REF_106" NUMBER,
         "SECN_106" VARCHAR2(88),
         "SEC_REF_107" NUMBER,
         "SECN_107" VARCHAR2(88),
         "SEC_REF_108" NUMBER,
         "SECN_108" VARCHAR2(88),
         "SEC_REF_109" NUMBER,
         "SECN_109" VARCHAR2(88),
         "SEC_REF_110" NUMBER,
         "SECN_110" VARCHAR2(88),
         "SEC_REF_111" NUMBER,
         "SECN_111" VARCHAR2(88),
         "SEC_REF_112" NUMBER,
         "SECN_112" VARCHAR2(88),
         "SEC_REF_113" NUMBER,
         "SECN_113" VARCHAR2(88),
         "SEC_REF_114" NUMBER,
         "SECN_114" VARCHAR2(88),
         "SEC_REF_115" NUMBER,
         "SECN_115" VARCHAR2(88),
         "SEC_REF_116" NUMBER,
         "SECN_116" VARCHAR2(88),
         "SEC_REF_117" NUMBER,
         "SECN_117" VARCHAR2(88),
         "SEC_REF_118" NUMBER,
         "SECN_118" VARCHAR2(88),
         "SEC_REF_119" NUMBER,
         "SECN_119" VARCHAR2(88),
         "SEC_REF_120" NUMBER,
         "SECN_120" VARCHAR2(88),
         "SEC_REF_121" NUMBER,
         "SECN_121" VARCHAR2(88),
         "SEC_REF_122" NUMBER,
         "SECN_122" VARCHAR2(88),
         "SEC_REF_123" NUMBER,
         "SECN_123" VARCHAR2(88),
         "SEC_REF_124" NUMBER,
         "SECN_124" VARCHAR2(88),
         "SEC_REF_125" NUMBER,
         "SECN_125" VARCHAR2(88),
         "SEC_REF_126" NUMBER,
         "SECN_126" VARCHAR2(88),
         "SEC_REF_127" NUMBER,
         "SECN_127" VARCHAR2(88),
         "SEC_REF_128" NUMBER,
         "SECN_128" VARCHAR2(88),
         "SEC_REF_129" NUMBER,
         "SECN_129" VARCHAR2(88),
         "SEC_REF_130" NUMBER,
         "SECN_130" VARCHAR2(88),
         "SEC_REF_131" NUMBER,
         "SECN_131" VARCHAR2(88),
         "SEC_REF_132" NUMBER,
         "SECN_132" VARCHAR2(88),
         "SEC_REF_133" NUMBER,
         "SECN_133" VARCHAR2(88),
         "SEC_REF_134" NUMBER,
         "SECN_134" VARCHAR2(88),
         "SEC_REF_135" NUMBER,
         "SECN_135" VARCHAR2(88),
         "SEC_REF_136" NUMBER,
         "SECN_136" VARCHAR2(88),
         "SEC_REF_137" NUMBER,
         "SECN_137" VARCHAR2(88),
         "SEC_REF_138" NUMBER,
         "SECN_138" VARCHAR2(88),
         "SEC_REF_139" NUMBER,
         "SECN_139" VARCHAR2(88),
         "SEC_REF_140" NUMBER,
         "SECN_140" VARCHAR2(88),
         "SEC_REF_141" NUMBER,
         "SECN_141" VARCHAR2(88),
         "SEC_REF_142" NUMBER,
         "SECN_142" VARCHAR2(88),
         "SEC_REF_143" NUMBER,
         "SECN_143" VARCHAR2(88),
         "SEC_REF_144" NUMBER,
         "SECN_144" VARCHAR2(88),
         "SEC_REF_145" NUMBER,
         "SECN_145" VARCHAR2(88),
         "SEC_REF_146" NUMBER,
         "SECN_146" VARCHAR2(88),
         "SEC_REF_147" NUMBER,
         "SECN_147" VARCHAR2(88),
         "SEC_REF_148" NUMBER,
         "SECN_148" VARCHAR2(88),
         "SEC_REF_149" NUMBER,
         "SECN_149" VARCHAR2(88),
         "SEC_REF_150" NUMBER,
         "SECN_150" VARCHAR2(88),
         "SEC_REF_151" NUMBER,
         "SECN_151" VARCHAR2(88),
         "SEC_REF_152" NUMBER,
         "SECN_152" VARCHAR2(88),
         "SEC_REF_153" NUMBER,
         "SECN_153" VARCHAR2(88),
         "SEC_REF_154" NUMBER,
         "SECN_154" VARCHAR2(88),
         "SEC_REF_155" NUMBER,
         "SECN_155" VARCHAR2(88),
         "SEC_REF_156" NUMBER,
         "SECN_156" VARCHAR2(88),
         "SEC_REF_157" NUMBER,
         "SECN_157" VARCHAR2(88),
         "SEC_REF_158" NUMBER,
         "SECN_158" VARCHAR2(88),
         "SEC_REF_159" NUMBER,
         "SECN_159" VARCHAR2(88),
         "SEC_REF_160" NUMBER,
         "SECN_160" VARCHAR2(88),
         "SEC_REF_161" NUMBER,
         "SECN_161" VARCHAR2(88),
         "SEC_REF_162" NUMBER,
         "SECN_162" VARCHAR2(88),
         "SEC_REF_163" NUMBER,
         "SECN_163" VARCHAR2(88),
         "SEC_REF_164" NUMBER,
         "SECN_164" VARCHAR2(88),
         "SEC_REF_165" NUMBER,
         "SECN_165" VARCHAR2(88),
         "SEC_REF_166" NUMBER,
         "SECN_166" VARCHAR2(88),
         "SEC_REF_167" NUMBER,
         "SECN_167" VARCHAR2(88),
         "SEC_REF_168" NUMBER,
         "SECN_168" VARCHAR2(88),
         "SEC_REF_169" NUMBER,
         "SECN_169" VARCHAR2(88),
         "SEC_REF_170" NUMBER,
         "SECN_170" VARCHAR2(88),
         "SEC_REF_171" NUMBER,
         "SECN_171" VARCHAR2(88),
         "SEC_REF_172" NUMBER,
         "SECN_172" VARCHAR2(88),
         "SEC_REF_173" NUMBER,
         "SECN_173" VARCHAR2(88),
         "SEC_REF_174" NUMBER,
         "SECN_174" VARCHAR2(88),
         "SEC_REF_175" NUMBER,
         "SECN_175" VARCHAR2(88),
         "SEC_REF_176" NUMBER,
         "SECN_176" VARCHAR2(88),
         "SEC_REF_177" NUMBER,
         "SECN_177" VARCHAR2(88),
         "SEC_REF_178" NUMBER,
         "SECN_178" VARCHAR2(88),
         "SEC_REF_179" NUMBER,
         "SECN_179" VARCHAR2(88),
         "SEC_REF_180" NUMBER,
         "SECN_180" VARCHAR2(88),
         "SEC_REF_181" NUMBER,
         "SECN_181" VARCHAR2(88),
         "SEC_REF_182" NUMBER,
         "SECN_182" VARCHAR2(88),
         "SEC_REF_183" NUMBER,
         "SECN_183" VARCHAR2(88),
         "SEC_REF_184" NUMBER,
         "SECN_184" VARCHAR2(88),
         "SEC_REF_185" NUMBER,
         "SECN_185" VARCHAR2(88),
         "SEC_REF_186" NUMBER,
         "SECN_186" VARCHAR2(88),
         "SEC_REF_187" NUMBER,
         "SECN_187" VARCHAR2(88),
         "SEC_REF_188" NUMBER,
         "SECN_188" VARCHAR2(88),
         "SEC_REF_189" NUMBER,
         "SECN_189" VARCHAR2(88),
         "SEC_REF_190" NUMBER,
         "SECN_190" VARCHAR2(88),
         "SEC_REF_191" NUMBER,
         "SECN_191" VARCHAR2(88),
         "SEC_REF_192" NUMBER,
         "SECN_192" VARCHAR2(88),
         "SEC_REF_193" NUMBER,
         "SECN_193" VARCHAR2(88),
         "SEC_REF_194" NUMBER,
         "SECN_194" VARCHAR2(88),
         "SEC_REF_195" NUMBER,
         "SECN_195" VARCHAR2(88),
         "SEC_REF_196" NUMBER,
         "SECN_196" VARCHAR2(88),
         "SEC_REF_197" NUMBER,
         "SECN_197" VARCHAR2(88),
         "SEC_REF_198" NUMBER,
         "SECN_198" VARCHAR2(88),
         "SEC_REF_199" NUMBER,
         "SECN_199" VARCHAR2(88),
         "SEC_REF_200" NUMBER,
         "SECN_200" VARCHAR2(88),
         "SEC_REF_201" NUMBER,
         "SECN_201" VARCHAR2(88),
         "SEC_REF_202" NUMBER,
         "SECN_202" VARCHAR2(88),
         "SEC_REF_203" NUMBER,
         "SECN_203" VARCHAR2(88),
         "SEC_REF_204" NUMBER,
         "SECN_204" VARCHAR2(88),
         "SEC_REF_205" NUMBER,
         "SECN_205" VARCHAR2(88),
         "SEC_REF_206" NUMBER,
         "SECN_206" VARCHAR2(88),
         "SEC_REF_207" NUMBER,
         "SECN_207" VARCHAR2(88),
         "SEC_REF_208" NUMBER,
         "SECN_208" VARCHAR2(88)
       );
    Thank you for any help/advice.
    Elie
    Edited by: EEG on Jun 12, 2011 2:09 PM

    So, is there some limit to the number of columns one can have in a "classic" or interactive report?
    Yes. See Oracle® Application Express Application Builder User's Guide Release 4.0, Appendix B: Oracle Application Express Limits.
    Any idea why I would be getting this error?
    No, but I've replicated it in APEX 4.0.2.00.07 on 11.2.0.1.0 EE using a table of 420 varchar2(88) columns:
    >
    ORA-20503: Current version of data in database has changed since user initiated update process. current checksum = "50C9BDC0AA1AEF0EB272E9158B2117B4" application checksum = "0"
    >
    Happens whether using select * or including all column names in the query. (I know you don't want to type all the column names, but I'd never use select * in a production application: always use a proper column list. You can get one without typing by drag-and-drop of a table in most IDEs, or a query from user_tab_columns.)
    I hit the problem at 274 columns. Such an arbitrary number leads me to think that the problem is not one of the number of columns per se, but is due to some other limit (possibly a 32K VARCHAR2/RAW buffer somewhere).
    Workaround:
    Updates to the report column attributes are actually being saved, and you can navigate them using the Page Definition tree view as described in Appendix B.
    Getting More Help:
    This is probably a bug. If you have a support agreement with Oracle raise an SR with Oracle Support.
    Also:
    - Search the forum using the "ORA-20503" code and other possible terms to see if there's anything relevant. I had a quick look, but the only thread in this context recommended an upgrade on an Oracle 9 DB version that's not compatible with APEX 4.0.
    - To get the attention of the Oracle APEX team or anyone else who may know more about this problem than we do, edit your original post and change the subject to be more specific about the actual nature of the problem: "BUG? APEX 4.0: ORA-20503 error editing report with 400+ columns", and include your database version/edition and the definition of the PVTAB table.
    Finally:
    Somebody's bound to ask, so we might as well get started:
    - Why so many columns?
    - What requirement is this trying to fulfil?

  • [bug?] playback issues with QT browser plugin, any workarounds?

    I've noticed a playback issue with media files using the QuickTime plugin (currently v7.1.3) with Firefox 1.5.0.x and now Firefox 2.0 on Windows XP (SP2):
    Description:
    When I open a media file in a separate tab or window, audio playback cuts out when I switch focus away from the tab or window (i.e. when it runs in the background). When I put the tab/window in the foreground again, playing resumes.
    Reproducible: Always
    Steps to Reproduce:
    1. Open an MP3 file with the QuickTime plugin in a new tab
    2. Switch focus to some other tab or Firefox window
    Expected Results:
    Audio playback should continue in the background.
    Actual Results:
    Audio playback cuts out and resumes once you click on the tab containing the plugin or hover over it with the cursor. The cutting out and resuming doesn't happen right away but after a short delay, suggesting that it might have something to do with thread prioritization.
    The file is completely buffered by QuickTime, so this is not a connection speed or bandwidth issue.
    Opening the link to the same file with right-click->Open Link in External Application works well.
    Someone else has filed a bug report against Firefox about this issue here:
    https://bugzilla.mozilla.org/show_bug.cgi?id=339449
    I wouldn't be surprised if this turns out not to be an issue with Gecko but with QuickTime.
    Can other users confirm this on XP? Is this a known QuickTime bug? Are there fixes or workarounds?
    Windows XP

    Yeah, I had that thought as well, but no joy. It works for the player but it doesn't solve my plugin problem. It really seems to be something about prioritization, the way the audio cuts in and out - notice how it takes a fraction of a second before a process loses its CPU cycles when you click away from it? You can observe this with System Monitor on Mac or Linux, or Process Explorer on XP. That's how the audio is cutting out and back in. I still think it's probably a bug/'feature' either in Gecko, the Firefox browser engine, or in the QT plugin.

  • Lightroom Photo Border Bug ?

    I have an issue where my Lightroom seems to display images correctly but once it sends them to the printer they get cropped differently. I did some digging around with the numbers and came to the conclusion that this is probably a bug or a feature that is implemented backwards. Basically what seems to create the problem is the "Photo Border" feature. When you add a border the image in Lightroom gets scaled down to compensate and that is what Lightroom displays. When printing however, the image is scaled back up to the size of the frame and the border is now masking it. Here are some images to demonstrate this:
    The original Image:
    The way Lightroom displays it when the border is added:
    The way Lightroom prints it:
    What actually goes on (masking instead of scaling to compensate for the border):
    Has anyone else run into this or can confirm that they get the same behaviour on their end ?

    I'm not in front of my Lightroom machine right now to check, but I would expect the display in the software to indicate what will actually be printed. The image is scaled down to compensate for the border as far as the Lightroom display goes, but that is not what the PDF preview before printing, or the actual print, reflects.

  • OGrid CollapseLevel Bug

    Hi guys, I have a question about something that seems to me to be most probably a bug. If you have a Grid (not a Matrix) and you set oGrid.CollapseLevel = 1, then the following problem occurs. When you use
    a1 = oGrid.DataTable.GetValue(4, 5).ToString();   // read the value in column index 4, row index 5
    oGrid.DataTable.SetValue(6, 5, a1);               // write it to column index 6, row index 5
    On some occasions this gets and sets the wrong rows. For example, it may get and set row 3 instead of row 5. If the CollapseLevel is 0 then it works fine.
    Can anyone confirm this and is there a workaround?
    Thanks

    Good morning Kostas,
    To get the correct row of the Grid when CollapseLevel is in use, translate the grid row index into the underlying DataTable row index with
    oGrid.GetDataTableRowIndex(pVal.Row)
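    For example, inside the item event handler it could look something like this (just a sketch: the "MyGrid" UID, the oForm reference, and the et_CLICK check are my own assumptions, not from your post):
    if (pVal.ItemUID == "MyGrid" && pVal.EventType == SAPbouiCOM.BoEventTypes.et_CLICK)
    {
        SAPbouiCOM.Grid oGrid = (SAPbouiCOM.Grid)oForm.Items.Item("MyGrid").Specific;
        // Map the visible (collapsed) grid row onto the underlying DataTable row
        int dataRow = oGrid.GetDataTableRowIndex(pVal.Row);
        // Read and write against the DataTable using the translated row index
        string a1 = oGrid.DataTable.GetValue(4, dataRow).ToString();
        oGrid.DataTable.SetValue(6, dataRow, a1);
    }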
    Hope it helps you,
    Kind regards
    George C.

  • System-wide blue to purple color switch bug

    I've been having a very confusing and inconsistent color issue which has persisted for years on both a PowerBook G4 and my new MacBook Pro using Panther and Tiger respectively. The MacBook is only two weeks old and still has a very clean system.
    Seemingly at random blue colors will switch to purple in random places in most apps. Restarting the computer used to solve the issue, but it no longer does - I am permanently stuck with a purple color scheme (screenshots further down).
    - The default unvisited link color in all apps has switched from blue to purple.
    - The finder icon has switched from blue to purple in both the dock and the command-tab switcher (but the icon is the correct color when viewed inside the CoreServices folder). All other blue dock icons are unaffected.
    - Some toolbar or sidebar images in most apps which were blue are now purple, but other blue icons have stayed blue.
    I have screenshots just to show that I'm not imagining this. Mail and the Finder use the same drag-and-drop sidebar highlight color. However, Mail has now taken on a purple theme:
    And here is the Finder icon as seen in the dock, and then as an alias in the Finder:
    Here is what I have tried thus far (nothing has worked):
    - Logging out and in
    - Logging out and in while holding shift (safe login)
    - Restarting the computer
    - Restarting the computer while holding shift (safe boot)
    - Repairing permissions and restarting
    - Zapping the PRAM (command-option-p-r)
    - Creating a new user account and logging into it
    - Running daily, weekly, and monthly cron scripts using Onyx
    - Deleting all caches for applications, fonts, system, and kernel
    - Pulling everything out of the main Library (Macintosh HD/Library/), clearing caches, and restarting
    From this I've concluded that it's probably a bug or corrupted file in the core system since safe boot, clean user accounts, and a bare library didn't fix the problem. The obvious solution would be to re-install the OS, but I'm afraid that the problem would just happen again since it has been occurring with multiple versions of the OS on two different computers.
    Does anyone have any ideas?

    I should add that this has been a slowly developing problem, one which is only getting worse. The first effect was a system-wide change in the default unvisited link color. Restarting the computer solved the problem whenever it would occur (about twice a day). The next to be affected was image colors, and once this happened for the first time, restarting the computer no longer fixed the problem.
