Wrong metadata retrieved when using Oracle8.1.7 + classes12.zip

Hi,
I am using Oracle 8.1.7 + classes12.zip + JDK 1.2.2.
I am always retrieving the wrong metadata for the tables. The same piece of code works fine when using Oracle 8.1.5 + classes111b.zip + JDK 1.2.2.
The code is as follows :
try {
    db = Database.getDatabase();
    con = db.getConnection();
    ps = con.prepareStatement("SELECT * FROM " + tablename);
    meta = (ps.executeQuery()).getMetaData();
} catch (SQLException e) {
    Debug.log("ServerMetaCache::requestMeta " + e);
    System.err.println("ServerMetaCache::requestMeta " + e);
    return null;
}
The following is a list of the column names retrieved from the metadata:
TABLE_CAT
TABLE_SCHEM
TABLE_NAME
COLUMN_NAME
KEY_SEQ
PK_NAME
Anyone encountered this before?

The code I posted above was incomplete.
The complete code is as shown below:
db = Database.getDatabase();
con = db.getConnection();
ps = con.prepareStatement("SELECT * FROM " + tablename);
meta = (ps.executeQuery()).getMetaData();
try {
    int count = meta.getColumnCount();
    columns = new ColumnDefinition[count];
    System.out.println("tname = " + tname);
    System.out.println("column count = " + count);
    for (int i = 0; i < count; i++) { // arrays start from 0 while columns start from 1
        String colName = meta.getColumnName(i + 1); // get the database name for this column
        System.out.println("colName = " + colName);
    }
} catch (SQLException e) {
    System.err.println(e.toString());
}
The result is :
tname = Product
column count = 6
colName = TABLE_CAT
colName = TABLE_SCHEM
colName = TABLE_NAME
colName = COLUMN_NAME
colName = KEY_SEQ
colName = PK_NAME
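For what it's worth, those six names, in that order, are exactly the columns of the ResultSet that DatabaseMetaData.getPrimaryKeys() returns according to the JDBC specification, so the driver appears to be handing back the metadata of a primary-key lookup rather than of the SELECT. A minimal standalone check of that observation (plain Java, no database connection needed; the column list is copied from the output above):

```java
import java.util.Arrays;
import java.util.List;

public class MetaCheck {
    // Columns of the ResultSet returned by DatabaseMetaData.getPrimaryKeys(),
    // in order, per the JDBC javadoc.
    static final List<String> PRIMARY_KEY_COLS = Arrays.asList(
            "TABLE_CAT", "TABLE_SCHEM", "TABLE_NAME",
            "COLUMN_NAME", "KEY_SEQ", "PK_NAME");

    static boolean looksLikePrimaryKeyMetadata(List<String> reported) {
        return PRIMARY_KEY_COLS.equals(reported);
    }

    public static void main(String[] args) {
        // Column names printed by the failing 8.1.7/classes12.zip run above
        List<String> reported = Arrays.asList(
                "TABLE_CAT", "TABLE_SCHEM", "TABLE_NAME",
                "COLUMN_NAME", "KEY_SEQ", "PK_NAME");
        System.out.println(looksLikePrimaryKeyMetadata(reported)); // prints true
    }
}
```

If the lists match, the bug is in which metadata the 8.1.7 driver associates with the statement, not in the querying code itself.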

Similar Messages

  • Trim mode 'wrong frame' problems when using merged clips

    Hello there,
    We are having a problem with the trim mode on FCP 6.0.4. We're working with 1080psf 24 Apple ProRes 422 material, captured using an AJA Kona card.
    The sound and picture were recorded separately, and synced manually (by creating merged clips) in FCP.
    When we go into the trim window, it shows a different frame as the incoming or outgoing frame on some clips. In other words it's not actually showing us the real frame before or after the cut. Instead it shows a frame that's half a second back, for example. When we preview the cut, it cuts in the 'right' place, but when we pause and try to fine cut the transition using +1 keys (or trim back/forward keys), it shows us the wrong frame. When we preview it again, the cut is still in the 'right' place - not the frame represented when it's stilled. This makes using trim mode nigh-on impossible since it's not actually showing you the frames you're cutting.
    Are there any patches or workarounds to fix this issue? I can't find any fixes on forums/help sites or the FCP manual.
    Tried trashing prefs, repairing disk permissions and all the usual maintenance stuff!
    Exporting new Quicktimes of the merged clips and then re-importing them is not an option, as they need to be timecode accurate to be recaptured for the online and vfx work.
    Any advice would be much appreciated - if no solutions exist we'll have to switch everything over to Avid, and that'd be a mammoth job.
    Many thanks in advance!

    You may not be zoomed in far enough in the Timeline window; the more you zoom in, the more detailed an edit you can perform. Zoom out too far and you lose that sensitivity.

  • Bug report & possible patch: Wrong memory allocation when using BerkeleyDB in concurrent processes

    When using the BerkeleyDB shared environment in parallel processes, the processes get "out of memory" error, even when there is plenty of free memory available. This results in possible database corruption.
    Typical use case when this bug manifests is when BerkeleyDB is used by rpm, which is installing an rpm package into custom location, or calls another rpm instance during the installation process.
    The bug seems to originate in the env/env_region.c file: (version of the file from BDB 4.7.25, although the culprit code is the same in newer versions too):
    330     /*
    331      * Allocate room for REGION structures plus overhead.
    332      *
    333      * XXX
    334      * Overhead is so high because encryption passwds, replication vote
    335      * arrays and the thread control block table are all stored in the
    336      * base environment region.  This is a bug, at the least replication
    337      * should have its own region.
    338      *
    339      * Allocate space for thread info blocks.  Max is only advisory,
    340      * so we allocate 25% more.
    341      */
    342     memset(&tregion, 0, sizeof(tregion));
    343     nregions = __memp_max_regions(env) + 10;
    344     size = nregions * sizeof(REGION);
    345     size += dbenv->passwd_len;
    346     size += (dbenv->thr_max + dbenv->thr_max / 4) *
    347         __env_alloc_size(sizeof(DB_THREAD_INFO));
    348     size += env->thr_nbucket * __env_alloc_size(sizeof(DB_HASHTAB));
    349     size += 16 * 1024;
    350     tregion.size = size;
    Usage from the rpm's perspective:
    Line 346 calculates how much memory we need for DB_THREAD_INFO structures. We allocate a DB_THREAD_INFO structure for every process calling the db4 library. We don't deallocate these structures, but when the number of processes is greater than dbenv->thr_max we try to reuse a structure belonging to a process that is already dead (or no longer uses db4). But the DB_THREAD_INFOs live in hash buckets, and we can reuse a DB_THREAD_INFO only if it is in the same hash bucket as the new DB_THREAD_INFO. So line 346 should read:
    346     size += env->thr_nbucket * (dbenv->thr_max + dbenv->thr_max / 4) *
    347         __env_alloc_size(sizeof(DB_THREAD_INFO));
    Why didn't we encounter this problem earlier? There are some magic reserves, as you can see on line 349, and some additional space is created by aligning to blocks. But if we have two processes running at the same time that end up in the same hash bucket, and we repeat this process many times to fill all hash buckets with two DB_THREAD_INFOs each, then we have 2 * env->thr_nbucket(37) = 74 DB_THREAD_INFOs, which is much more than dbenv->thr_max(8) + dbenv->thr_max(8) / 4 = 10; add the allocation from dbc_put and we are out of memory.
    And how do we create two processes that end up in the same hash bucket? Start one process (rpm -i), then in a scriptlet start many processes (rpm -q ...) in a loop; one of them will land in the same hash bucket as the first process (rpm -i).
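    The arithmetic above can be checked in isolation. A small sketch (in Java rather than the C of env_region.c; thr_max=8 and thr_nbucket=37 are the values quoted in this report) comparing how many DB_THREAD_INFO slots the shipped line 346 budgets against the worst case described:

    ```java
    public class ThreadInfoSizing {
        // Slot budget as shipped on line 346: thr_max plus 25% headroom
        static int originalSlots(int thrMax) {
            return thrMax + thrMax / 4;
        }

        // Slot budget with the proposed fix: the same headroom per hash bucket
        static int patchedSlots(int thrMax, int nBucket) {
            return nBucket * (thrMax + thrMax / 4);
        }

        public static void main(String[] args) {
            int thrMax = 8, nBucket = 37;
            // Worst case described above: two DB_THREAD_INFOs stuck in every bucket
            int needed = 2 * nBucket;
            System.out.println("original = " + originalSlots(thrMax));         // 10
            System.out.println("patched  = " + patchedSlots(thrMax, nBucket)); // 370
            System.out.println("needed   = " + needed);                        // 74
        }
    }
    ```

    The shipped budget (10) falls far short of the 74 structures the scenario accumulates, while the per-bucket budget covers it.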
    I would like to know your opinion on this issue, and if the proposed fix would be acceptable.
    Thanks in advance for answers.

    The attached patch for db-4.7 makes two changes:
      it allows enough for each bucket to have the configured number of threads, and
      it initializes env->thr_nbuckets, which previously had not been initialized.
    Please let us know how it works for you.
    Regards,
    Charles

  • Wrong album art when using PC library

    I have a new iPad 2. I also have iTunes running on a PC.
    On the PC, all of the album art is correct, but when I connect over Wi-Fi to my iTunes library on the PC from my iPad using the Music app, some of the albums have the wrong artwork. Most are correct, but some albums (quite a few, as it works out) show the artwork of other albums in my library.
    Also, when I connect to my iCloud library from the iPad, all of the albums have the correct artwork. So, the problem only occurs when I connect to my PC's library from the iPad.
    Any ideas how to fix it?
    Is there a way to force the Music app to forget what it thinks it knows about my PC library so that it'll have to start from scratch?
    I'm using iTunes Match on both the PC and the iPad.
    I've searched the web and found some other people complaining of the same thing but always with different scenarios and the rare times a solution is suggested it doesn't seem to apply to my scenario.
    iTunes v.10.5.1.42, on PC (Win 7, x64)
    iPad v.5.0.1

    I think I just figured out one way of forcing the Music app to start over with the library.  Log into a different account, reboot, and start the Music app.  Then log back into the correct account, reboot again, start the Music app, and connect to the PC library.  It should start from scratch.
    Perhaps there is a simpler way to accomplish the same thing.
    The album art appears to be correct now.

  • JMF, Linux, wrong codec selected when using jarred version of app

    I have a problem realising an AVI player when I run my Linux JMF application with the whole application bundled into a jar file, launched using
      java -cp <..JMF/lib/jmf.jar:/home/codroe/fobs4jmf.jar..> -jar app.jar
    In jmf.log I see:
    ## Player created: com.sun.media.content.unknown.Handler@1cd2e5f
    ##   using DataSource: com.sun.media.protocol.file.DataSource@911f71
    ## Building Track: 0
    ## Input: XVID, 720x576, FrameRate=25.0, Length=1244160 68 extra bytes
    !! Failed to handle track 0
    XX   Unable to handle format: XVID, 720x576, FrameRate=25.0, Length=1244160 68 extra bytes
    This is exactly the message I usually get when some part of JMF cannot find the "jmf.properties" file (for example, if it could not find .jmfdir). But looking at the strace from the application run, it appears JMF did find my jmf.properties this time, yet it still chose not to use the omnividea codec:
    stat64("jmf.properties", 0xfeffc410)    = -1 ENOENT (No such file or directory)
    stat64("/home/codroe/JMF-2.1.1e/lib/jmf.properties", {st_mode=S_IFREG|0644, st_size=31354, ...}) = 0
    open("/home/codroe/JMF-2.1.1e/lib/jmf.properties", O_RDONLY|O_LARGEFILE) = 11
    When I run the unbundled (i.e. not wrapped up in a jar) version of the Linux application (same classpath), it finds jmf.properties no problem, chooses the omnividea codec, and is able to play the video, with good things in the JMF log:
    ## Player created: com.sun.media.content.unknown.Handler@30c221
    ##   using DataSource: com.omnividea.media.protocol.file.DataSource@a401c2
    ## Building Track: 0
    ## Input: FFMPEG_VIDEO, 720x576, FrameRate=25.0, Length=414720 0 extra bytes
    My Windows version works fine in both configurations, jarred or not jarred.
    The Linux strace log shows that my working non-JARed version has selected the omnividea codec:
    gettimeofday({1100876842, 175149}, NULL) = 0
    stat64("/home/codroe/media/protocol/file/DataSource.class", 0xfeffc384) = -1 ENOENT (No such file or directory)
    stat64("/home/codroe/JMF-2.1.1e/lib/media/protocol/file/DataSource.class", 0xfeffc384) = -1 ENOENT (No such file or directory)
    gettimeofday({1100876842, 179815}, NULL) = 0
    stat64("/home/codroe/media/protocol/file/DataSource.class", 0xfeffc384) = -1 ENOENT (No such file or directory)
    stat64("/home/codroe/JMF-2.1.1e/lib/media/protocol/file/DataSource.class", 0xfeffc384) = -1 ENOENT (No such file or directory)
    gettimeofday({1100876842, 182979}, NULL) = 0
    gettimeofday({1100876842, 183206}, NULL) = 0
    gettimeofday({1100876842, 184371}, NULL) = 0
    gettimeofday({1100876842, 184591}, NULL) = 0
    stat64("/home/codroe/com/omnividea/media/protocol/file/DataSource.class", 0xfeffc384) = -1 ENOENT (No such file or directory)
    stat64("/home/codroe/JMF-2.1.1e/lib/com/omnividea/media/protocol/file/DataSource.class", 0xfeffc384) = -1 ENOENT (No such file or directory)
    gettimeofday({1100876842, 191005}, NULL) = 0
    but as I don't see this in the trace from the JAR version, and because the JMF log says:
    ## Player created: com.sun.media.content.unknown.Handler@30c221
    ##   using DataSource: com.omnividea.media.protocol.file.DataSource@a401c2
    I believe it has not selected the correct codec, for a reason I don't understand.
    [I checked I only have one version of jmf.properties on system].
    Any ideas why jarred version is different from non-jarred version?
    version information:
    # JMF Version 2.1.1e
    ## Platform: Linux, i386, 2.6.5-1.358
    ## Java VM: Sun Microsystems Inc., 1.5.0
    and using fobs4jmf from http://fobs.sourceforge.net
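    One possible cause worth checking (an assumption about the launch command, not something the logs above confirm): when the JVM is started with -jar, the -cp option and the CLASSPATH environment variable are ignored, and only the jar itself plus the Class-Path entries in its manifest are searched. If fobs4jmf.jar is supplied only via -cp, its com.omnividea classes would be invisible to the jarred run. Adding the dependent jars to the manifest (paths illustrative, resolved relative to app.jar) would put them back on the class path:

    ```
    Class-Path: JMF-2.1.1e/lib/jmf.jar fobs4jmf.jar
    ```

    Alternatively, drop -jar and launch the main class explicitly with everything on -cp, e.g. java -cp app.jar:<..jmf.jar:fobs4jmf.jar..> <MainClass>.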

    Correction: for the last bit I should have said that for the non-working jarred version, in the JMF log I see:
    ## Player created: com.sun.media.content.unknown.Handler@1cd2e5f
    ##   using DataSource: com.sun.media.protocol.file.DataSource@911f71
    indicating the wrong codec was selected.

  • "Wrong" material number when using manufacturer part number

    Hello gurus,
    when I create a new material, first the system gives me a new material number according to the given number range of the material type, let's say 100000002. So far, so good.
    But when I enter a manufacturer part number (e.g. 123456789) and press enter, the material number changes to 00000000000000000000000000000000100000002. And when I save the material master, the number is 123456789, although that number is not even allowed by the number range.
    What's wrong here? Is this a bug or a feature? In my view the manufacturer part number should never overwrite our own material number!
    Thanks
    Alicia

    Which material type are you using for creating manufacturer parts? It should be HERS, and the number range for the HERS material type is internal.
    If you want to maintain it externally, you have to define a new external number range and assign it to HERS.
    When you select HERS you see only the Basic Data and Purchasing views. There you enter the description of the manufacturer part and the original material number.
    Here is the process for the manufacturer part profile:
    Create external number ranges for vendor account group MNFR and save.
    Create each make/manufacturer (e.g. ABB) as a vendor in transaction XK01.
    Create the original material code, say XXXX, in MM01 with material type ROH (for example a transistor that is available from both ABB and Siemens).
    Create a material code for the ABB make, say 11, with material type HERS, which contains only two views (Basic Data & Purchasing); enter the make, the original material number, the material group and other details, and save.
    Create a material code for the Siemens make, say 12, with material type HERS in the same way, and save.
    Create a purchase order with the original material XXXX; it will then ask for the manufacturer part profile. Select the material and press F4 on it; it will show the created manufacturer part numbers (ABB 11 & Siemens 12). Select one and save.
    The prerequisite is that you activate the manufacturer part profile in transaction OMPN,
    and in the material master Purchasing view select the manufacturer part profile and save.
    Even if you set the manufacturer part number to display mode in the SPRO settings, the system will throw the same error.
    Now what can be done is:
    Either go to ME21N, select the material tab after input, press F4, and choose the manufacturer part number option; the system will show the list of manufacturer part numbers maintained for that material. Select the desired one and save.
    Or go to MM02, Purchasing view; in the manufacturer part profile field there must be a four-character profile assigned. Remove this and save.
    The second option is not good practice.

  • Any way to skip a metadata field when using "Import Maps" via Archiver?

    Typically when you import an archive, you want to keep both the file & the metadata. However, I have a scenario where I need to keep the existing metadata value.
    For example,
    If the content archive has "ABC" for dDocAccount, and the environment I'm importing to has "XYZ" for dDocAccount, I want it to remain at "XYZ". The native file, as expected, is imported.
    If there's a way to skip ALL metadata importing, that would work for me too. I just need to import the file. Before you ask, no, I cannot do a simple check out/in. This has to be done in Archiver.
    Thanks, and I'll award points :)

    Using dDocAccount was really a poor example choice, but I chose it since it's very common and everybody knows about it. The actual field I'm trying to maintain is xWebsites for Site Studio (but no matter which field I mention, I still need the same effect). I don't want to use value maps because then I'd have to create a billion archives, one for every combination of possible values of that metadata field in the system. That'd be overkill compared to just one archive.
    The best I've come up with so far is to override the "UPDATE_BYREV" service and remove the "2:Umeta::0:null" service action. This does what I'm looking for, but that'll obviously affect all checkins (and other core functionality), and I simply want it to be done during an archive replication.
    And yes, I know about URLs and accounts ... good thing I never use those poorly-designed user-unfriendly "web location" URLs.
    Thanks though.

  • Wrong views Selected when using LSMW + Recording

    Hi,
    I use the LSMW + Recording method to extend the warehouse management view, and it works fine in the DEV system. But when the LSMW project was imported into the QA system, the system selected the wrong views at the beginning when running this LSMW object: the WM 1 and WM 2 views should be selected, but in the QA system only the Accounting view is selected instead. What is the reason? Kindly please advise.
    Thanks.

    The reason is the dynamic view selection.
    Depending on the other views already maintained (like sales, purchasing and storage location), the position of WM 1+2 changes.
    Instead of recording MM01 you should record MMZ1, as this has a static place for the WM views.
    But even better is to use report RLMG0020.

  • Wrong week day when using Calendar

    Today is the 14th of May, which is a Monday. Calendar.MONDAY = 2, and therefore:
    Calendar rightNow = Calendar.getInstance();
    int weekday = rightNow.get(Calendar.DAY_OF_WEEK);
    System.out.println(weekday);
    should print 2, which it also does. But when I want to see which weekday the first of June is, I do:
    rightNow.add(Calendar.MONTH, 1);
    weekday = rightNow.get(Calendar.DAY_OF_WEEK);
    System.out.println(weekday);
    But it prints 5. According to my physical calendar the first of June is a Friday, so 6 should have been printed. What is wrong?

    Did you try the following?
    rightNow.set(Calendar.DAY_OF_MONTH, 1);
    rightNow.add(Calendar.MONTH, 1);
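    The set() before the add() is the key: add(Calendar.MONTH, 1) keeps the day of month, so starting from the 14th it lands on 14 June, not 1 June. A self-contained check, pinned to the dates in the question (Calendar months are zero-based, hence Calendar.MAY):

    ```java
    import java.util.Calendar;
    import java.util.GregorianCalendar;

    public class WeekdayDemo {
        public static void main(String[] args) {
            // Pin the date to the one in the post: Monday, 14 May 2007
            Calendar rightNow = new GregorianCalendar(2007, Calendar.MAY, 14);
            System.out.println(rightNow.get(Calendar.DAY_OF_WEEK)); // 2 (Calendar.MONDAY)

            // add(MONTH, 1) keeps the day of month: this is 14 June, a Thursday
            Calendar wrong = new GregorianCalendar(2007, Calendar.MAY, 14);
            wrong.add(Calendar.MONTH, 1);
            System.out.println(wrong.get(Calendar.DAY_OF_WEEK)); // 5 (Calendar.THURSDAY)

            // Reset to the 1st before adding the month to land on 1 June
            Calendar firstOfJune = new GregorianCalendar(2007, Calendar.MAY, 14);
            firstOfJune.set(Calendar.DAY_OF_MONTH, 1);
            firstOfJune.add(Calendar.MONTH, 1);
            System.out.println(firstOfJune.get(Calendar.DAY_OF_WEEK)); // 6 (Calendar.FRIDAY)
        }
    }
    ```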

  • Wrong date format when using selection screen query

    Hi all,
    I have a problem in a report when using the selection screen of the query.
    The system has recently been upgraded from 3.5 to 7.0. When a query is run in BEx Web, the user can enter the selection date needed to run the query.
    Currently, if you select a month using the selection screen next to the input form, the month shows up in the input field
    as 006 09 (006space09) instead of 06.2009 for the selection of June.
    Does anyone know how to fix this? It was working fine in the 3.5 version of BEx Web.
    Any help appreciated

    Using the list cube transaction and the selection screen, selecting the month does put the right selection in the input field. I've also just tested it using the BEx Excel plugin, and with the selection screen the correct value is set in the input field. So I think it's more of a BEx Web problem, but I don't know where to start searching for the solution.

  • BPC Office Client : No data retrieval when using static dimensions

    Hello,
    We have used static dimensions to build reports / input schedules, but the data range remains empty when refreshing the workbook after the column and row key ranges have been filled. There is definitely data in the application for the combination of the current view members and the range members chosen. I have put 3 dimensions in the row key range and 4 dimensions in the column key range.
    What are the common solutions for this problem ? I don't receive any errors while refreshing the workbook.

    Have you checked security member access profiles to know that the user has access to that slice of data?
    You might also double-check your data in BW by running the Shared Query Engine test program UJQ_SQE_TEST from transaction SA38 or SE38. Select your AppSet and Application, and match in the filters the static members you've set in your report.
    If you still have the issue, try to replicate it with a delivered template report and confirm no data is shown.
    Otherwise, please create a customer [message|http://service.sap.com/message] in the SAP Service Marketplace and report your issue as a potential bug. To expedite a solution it would be helpful if you detail the steps exactly for the Support team to recreate, verify and troubleshoot the problem.
    Best regards,
    [Jeffrey Holdeman|https://www.sdn.sap.com/irj/sdn/wiki?path=/display/profile/jeffrey+holdeman]
    SAP BusinessObjects
    Enterprise Performance Management
    Regional Implementation Group

  • Wrong reply from, when using multiple email addresses

    Apple Mail has this strange behavior. Maybe it is a bug.
    I have 3 email addresses in Mail: [email protected], [email protected] and [email protected]
    [email protected] is my primary email address, the other ones are from my co-workers.
    Sometimes I get an email with my email address (among others) in the CC field and the other two in the TO field. When I press the Reply All button, Mail generates a new message. But instead of sending it from the [email protected] account, it tries to send it from the first email account what it found in the TO field of the original message. And it does not reply to any of the email addresses in the CC field of the original message.
    I tried this with different domains. Same result. Tried it by forcing Mail sending mail from my account only. Same result.
    What is going on here? Is it a bug?

    Normally Mail would create a new email using the top email address in your list of accounts. For example, if in Preferences your accounts are listed like this:
    [email protected]
    [email protected]
    [email protected]
    then normally new messages would be sent from the top email address.

  • Wrong mouse pointer when using VNC

    Hi.
    I have installed Arch Linux v3.7.8 on to a VMWare virtual machine and configured VNC for remote access.
    However, when I access it via VNC (using KRDC) and open Firefox, for example, I don't get a proper mouse pointer that allows me to select links, buttons or menus. The only pointer I get is a double-headed arrow (left/right).
    If I do the same action via the vSphere Client console everything works as expected.
    Is anyone familiar with this and able to help me solve this please?
    Bill

    Gusar wrote:If only KRDC has the issue and other clients don't (did you try others besides vSphere?), then the issue is in KRDC. So report it to them.
    Hi Gusar,
    Thanks for the reply. It's been tested with the Windows VNC client, and that too has the same issue as KRDC, so I suspect it's a VNC server (configuration) issue. Not too sure how to proceed.
    Bill

  • Password "wrong" or irretrievable when using Screen Sharing

    Strangely, I tried using Screen Sharing in the Finder to access my iMac from my MacBook today and was asked to enter my password. I presume that this is the system password that I use to log in to my computer. It is the same for both. However, no dice...
    Furthermore, I can't find this password in Keychain, also to no avail. I have no idea how to get things right so that I can access Screen Sharing.
    Thanks in advance for any ideas you may have.

    Start with Finder->Help->Mac Help->search for *Screen Sharing* and peruse the many hits.

  • Java.lang.OutOfMemoryError using JDBC thin driver (classes12.zip)

    I am facing a strange situation trying to read an Oracle user-defined type.
    First I have defined the following UDT:
    TYPE FILE_COMP AS OBJECT (
        NOME_FILE VARCHAR2(1024),
        FLAG_COMPRESS CHAR(1)
    )
    Then I wrote a Java class that implements the CustomDatum, CustomDatumFactory interfaces in order to read FILE_COMP columns from the database.
    When I try to read data from the table I get the following error:
    Exception occurred during event dispatching:
    java.lang.OutOfMemoryError:
    at oracle.jdbc.oracore.UnpickleContext.read_bytes(UnpickleContext.java:72)
    at oracle.jdbc.oracore.UnpickleContext.apply_patches_per_type(UnpickleContext.java:159)
    at oracle.jdbc.oracore.UnpickleContext.apply_patches(UnpickleContext.java, Compiled Code)
    at oracle.jdbc.oracore.OracleTypeADT.unpickle(OracleTypeADT.java:728)
    at oracle.jdbc.oracore.OracleTypeADT.unpickle(OracleTypeADT.java:684)
    at oracle.jdbc.oracore.OracleTypeADT.unlinearize(OracleTypeADT.java:665)
    at oracle.sql.StructDescriptor.toArray(StructDescriptor.java:275)
    at oracle.sql.STRUCT.getOracleAttributes(STRUCT.java:250)
    at oracle.jpub.runtime.MutableStruct.getLazyDatums(MutableStruct.java, Compiled Code)
    at oracle.jpub.runtime.MutableStruct.getAttribute(MutableStruct.java:76)
    at sctra.ValConfig.FileCompDatum.getNomeFile(FileCompDatum.java)
    at sctra.ValConfig.FileCompRenderer.getTableCellRendererComponent(FileCompRenderer.java, Compiled Code)
    at javax.swing.JTable.prepareRenderer(Unknown Source)
    at javax.swing.plaf.basic.BasicTableUI.paintCell(Unknown Source)
    at javax.swing.plaf.basic.BasicTableUI.paintRow(Unknown Source)
    at javax.swing.plaf.basic.BasicTableUI.paint(Unknown Source)
    at javax.swing.plaf.ComponentUI.update(Unknown Source)
    at javax.swing.JComponent.paintComponent(Unknown Source)
    at javax.swing.JComponent.paint(Unknown Source) ...
    The strange thing is that all works fine if I run the same code against the test database (I have two databases, one for tests and one for operations).
    The two databases are identical (same version, same content).
    The only difference is that the operation database is 64 bit.

    sounds like you have a memory leak
    i doubt MySQL has anything to do with it
    make sure you are releasing database and other resources properly?
    if that's not the case, maybe you are literally piling up too many records in memory? how many records are you dealing with?

    Hi,
    thanks for your answer. In the meantime I found my (dumb ;-) mistake.
    I tried to release database resources properly, but it did not work because of a NullPointerException.
    Regards
    Hendrik
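    For reference, since the fix was resource cleanup: a null-safe JDBC cleanup sketch (generic, not the poster's actual code) that avoids exactly this kind of NullPointerException when one of the resources was never assigned. Call it from a finally block so cleanup runs whether or not the query threw.

    ```java
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public final class JdbcCleanup {
        // Close each resource only if it was actually created; a failure to
        // close one resource must not prevent the others from being closed.
        public static void closeQuietly(ResultSet rs, Statement st, Connection con) {
            if (rs != null) {
                try { rs.close(); } catch (SQLException ignored) { }
            }
            if (st != null) {
                try { st.close(); } catch (SQLException ignored) { }
            }
            if (con != null) {
                try { con.close(); } catch (SQLException ignored) { }
            }
        }
    }
    ```

    Passing null for resources that were never opened is safe by design, which is what the missing null checks in the original cleanup code broke.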
