Caching RAW and LONG RAW objects

Hi,
Is there any way to cache RAW and LONG RAW objects, like BLOB caching?
Thanks

Is there any way to cache RAW and LONG RAW objects, like BLOB caching?
What is your database version?
To fetch data from a LONG RAW column you must identify the target row by one of the following:
1) Primary key
2) ROWID
3) Unique column(s)

Similar Messages

  • Migration of LONG and LONG RAW datatype

    Just upgraded a DB from 8.1.7.4 to 10.2.0.1.0. The post-upgrade tasks speak of migrating tables with LONG and LONG RAW datatypes to CLOBs or BLOBs. All of the tables in my DB with LONG or LONG RAW datatypes are in the sys, sysman, mdsys or system schemas (as per a query of dba_tab_columns). Are these to be converted? Or does Oracle want us to convert user data only (user_tab_columns)?

    USER_TAB_COLUMNS tells you the columns in the tables owned by the current user. There may well be many other schemas on your system that you created that contain objects. I suppose you could log in to each of those schemas and query their USER_TAB_COLUMNS view, but it's probably easier to query DBA_TAB_COLUMNS with an appropriate WHERE clause on the owner of the objects.
    Justin

  • Counter negative cache hits and long searches

    Hi, my name is Sumit.
    In my network there is a frame-relay PVC on a 7206 VXR router on which two T1s are terminated (from an S8700 EPABX), and the bandwidth of this PVC is 600 kbps. We are using RTP and TCP header compression (codec g729r8). The 7206 router is connected to an IGX through an HSSI cable, and there is IGX-to-IGX connectivity from India to the US. The US site has the same network and is further connected to the AT&T cloud. Traffic on this PVC is mainly inbound from the US AT&T cloud.
    We are facing a lot of voice-quality issues, and when I use show frame-relay ip rtp header-compression, the "negative cache hits" and "long searches" counters increase rapidly.
    Can somebody help me solve the problem, and tell me whether these counters have any effect on voice quality when they increase rapidly?

    Hi Neo, thanks for the reply.
    Are you from Datacraft? I am also from Datacraft.
    I have already used this URL, but it gives only one statement about the counters.
    What I am asking is: what could be the impact of these counters on voice quality?
    Please reply.
    Thanks

  • Effect of negative cache hits and long searches

    Hi, my name is Sumit.
    In my network there is a frame-relay PVC on a 7206 VXR router on which two T1s are terminated (from an S8700 EPABX), and the bandwidth of this PVC is 600 kbps. We are using RTP and TCP header compression (codec g729r8). The 7206 router is connected to an IGX through an HSSI cable, and there is IGX-to-IGX connectivity from India to the US. The US site has the same network and is further connected to the AT&T cloud. Traffic on this PVC is mainly inbound from the US AT&T cloud.
    We are facing a lot of voice-quality issues, and when I use show frame-relay ip rtp header-compression, the "negative cache hits" and "long searches" counters increase rapidly.
    Can somebody help me solve the problem, and tell me whether these counters have any effect on voice quality when they increase rapidly?

    Is there anyone who can answer this query?
    Any update?

  • M-RAW and S-RAW of Canon 7D with 5.6

    Hello,
    as I see, version 5.6 unfortunately doesn't really support M-RAW and S-RAW of the Canon 7D. A DNG file converted from M-RAW inflates to about 150% of the size of the original M-RAW file (18 MB -> 28 MB). The only reason to shoot in M-RAW is the smaller file size.
    Best regards
    Herbie

    herbie0815 wrote:
    > 2. The mRaw and sRaw files are not raw any more; they are demosaiced in such
    > a format, which can not be stored identically in DNG. The conversion to DNG has
    > to "expand" the original data, thus the result will be much larger than the CR2,
    > even if compressed.
    Hmm, I'm not satisfied with this explanation. Maybe I don't understand the technical side, but as I said, the M-RAW has 10 MP and the normal RAW has 18 MP. I really expected the DNG from M-RAW to be smaller than the one from normal RAW...
    Well, if you don't understand the technical side, then you don't have any basis to expect a certain behavior, which reflects purely technical aspects. Anyway, I'll try to give a brief explanation.
    Bayer-type raw data has only one value per pixel: the "red", "green" or "blue" raw value (which is not identical to the red, green or blue value from RGB). The mRaw and sRaw are demosaiced: each pixel has three components, red, green and blue. The red and blue components are the same for two neighbouring pixels, therefore they are not stored twice. (Think of this: the sRaw has only a quarter as many pixels as the normal raw, but the file size is closer to half the size of the normal raw.) However, DNG does not support storing the data this way, therefore the DNG converter has to create a file in which all pixels have all three components; thus the DNG contains much more data than the respective sRaw or mRaw.
    Gabor
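    Gabor's size arithmetic can be sketched numerically. This is an illustrative model only; the bit depth and the exact component layout are assumptions, not Canon's actual CR2 encoding:

```python
# Rough storage model for Bayer raw vs. demosaiced sRaw/mRaw vs. DNG.
# All figures are illustrative assumptions, not Canon's actual format.

def bayer_bytes(pixels, bits=14):
    # Bayer raw: one raw sample per pixel
    return pixels * bits / 8

def sraw_bytes(pixels, bits=14):
    # Demosaiced sRaw/mRaw: one component per pixel, plus two chroma-like
    # components shared between neighbouring pixel pairs
    # -> roughly 2 stored components per pixel instead of 3
    return pixels * 2 * bits / 8

def dng_bytes(pixels, bits=14):
    # DNG expansion: all three components stored for every pixel
    return pixels * 3 * bits / 8

full = 18_000_000    # ~18 MP full raw
quarter = full // 4  # sRaw has a quarter of the pixels

# sRaw comes out around half the size of the full raw, despite 1/4 the pixels:
print(sraw_bytes(quarter) / bayer_bytes(full))   # -> 0.5
# Converting sRaw to DNG inflates it by ~50%:
print(dng_bytes(quarter) / sraw_bytes(quarter))  # -> 1.5
```

    The same two ratios match the observations in the thread: a demosaiced file close to half the full raw, and a DNG roughly 50% larger than the CR2 it came from.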

  • JDBC and Long RAW datatypes

    I have written a Java/JDBC program that grabs JPEG images. I would like to store them in an Oracle database in a LONG RAW column (for legacy databases before BLOBs). Can someone show me some sample code that would allow me to do this? All help is GREATLY appreciated.
    TIA
    RHC

    From http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements001.htm#i54330
    >
    RAW(size)
    Raw binary data of length size bytes. Maximum size is 2000 bytes. You must specify size for a RAW value.
    LONG RAW
    Raw binary data of variable length up to 2 gigabytes.
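    The question asked for sample code. Whatever the client API, the key idea is to bind the image bytes as a binary parameter rather than inlining them in SQL; in JDBC this would be PreparedStatement.setBinaryStream on an INSERT. A minimal sketch of that binding pattern, using Python's built-in sqlite3 as a stand-in database (the table name and dummy bytes are invented for illustration; Oracle LONG RAW specifics differ):

```python
import sqlite3

# Stand-in database; in the real case this would be an Oracle connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (name TEXT PRIMARY KEY, data BLOB)")

jpeg_bytes = b"\xff\xd8\xff\xe0" + b"\x00" * 64  # dummy JPEG-ish bytes

# Bind the binary data as a parameter -- the JDBC equivalent is
# ps.setBinaryStream(2, inputStream, length) on the INSERT statement.
conn.execute("INSERT INTO images (name, data) VALUES (?, ?)",
             ("photo1.jpg", jpeg_bytes))

stored = conn.execute("SELECT data FROM images WHERE name = ?",
                      ("photo1.jpg",)).fetchone()[0]
assert stored == jpeg_bytes  # the bytes round-trip intact
```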

  • Large and long lived objects, heap vs. NIO?

    Hi,
    Our application keeps a large number of objects in a cache (based on Oracle Coherence), and the objects hold subscriber information, which means they are supposed to stay in the cache for a very long time.
    The heap size we are looking at is about 3+ GB, and probably the majority of the heap will be used to hold the cached objects. Is there any way to let the JRockit GC know about the 'cache' concept so it won't spend too much effort on these 'cached' objects?
    Any JRockit tuning tips for this kind of application?
    Is NIO a better approach, i.e. using NIO to hold the cached objects rather than the heap?
    But every NIO-backed object also consumes some heap (a Java object is created), at least in the Sun HotSpot implementation; I'm not sure what the trade-off is. What is JRockit's standpoint on NIO, given its GC is superior to Sun HotSpot's, i.e. JRockit's 'guaranteed pausetime' combined with the ability to run a large heap at the same time?
    NIO seems to allow going beyond the traditional long-pause concern when the heap size is large.
    Our application is supposed to handle at least several hundred GB in a distributed configuration (a cluster based on Oracle Coherence).
    Is this something where JRockit may or may not have an advantage or disadvantage?
    We heard about 'distributed GC' in a distributed environment, meaning that the dependency between JVMs may cause one local GC to depend on the remote GC on a remote node if GC happens at the same time on different nodes. Any thoughts from a JRockit perspective?
    Regards,
    Jasper

    Thanks a lot, Stefan
    I appreciate the extra mile you went beyond just JRockit...
    One clarification to my original question: I actually meant a large number of objects rather than large-sized objects. The objects range from 50 bytes to a few KB, and the total number of objects is likely in the several millions.
    We have been testing Heap vs. NIO based on Sun JVM.
    But we are not able to see clear advantages one over the other at this point.
    - Using NIO does use less heap, but the overall process memory between heap and NIO is not much different in our case.
    - Our original thinking in using NIO is that we want to cache more than 2+ GB of objects per JVM, so we can make good use of standard server configurations (8/16 cores, 32/64 GB RAM) without a huge number of JVMs. We're targeting 100+ million subscribers in the telecom space. 2 GB is basically the limit that normal GC (HotSpot) can probably handle without causing long pauses; otherwise it will break our SLA (the latency requirement is in the range of milliseconds).
    - From reading about the HotSpot JVM, it turns out that using NIO is not totally heap-free: additional book-keeping for the NIO-resident objects is created in the form of Java objects, which consume heap too. So GC is not totally avoided. There is also some extra overhead in handling the NIO objects (create/delete) via JNI calls.
    - Even though we are not using NIO directly, we use Coherence, and Coherence provides heap and NIO options for the cached objects, but ultimately it's up to the JVM. That is why I bring this up with JRockit. I think the NIO size limit is probably due to one direct memory allocation being limited to what an 'int' can cover, but if one allocation call is not big enough, multiple allocation calls should do (probably more complexity/effort...). But 'int' covers up to 4 GB, which is already good enough in our case for a JVM caching objects in NIO.
    - Our ultimate decision will be based on TCO: low CPU/memory with high throughput and low latency, meeting the SLAs.
    We will come back to share our test results based on JRockit RealTime later and may ask for more of your insights then.
    Best Regards,
    Jasper

  • Can we use BLOB instead of LONG RAW in JMSStore

    (Oracle 9i, Weblogic 8.1.2)
              We are putting in place a Dataguard environment (or standby database). In such environment, prod data is copied to another database in 'pseudo' real time. Unfortunately some 'old' datatypes are not supported by Dataguard. In JMS tables %JMSSTORE, the field RECORD is defined as LONG RAW and LONG RAW is one of the unsupported datatypes.
              Can we alter those tables in order to use BLOB instead of LONG RAW ?
              Regards,
              Bao Nguyen
              

    Hi Bao,
              Answers in-line:
              Bao Nguyen wrote:
              > (Oracle 9i, Weblogic 8.1.2)
              >
              > We are putting in place a Dataguard environment (or standby database). In such environment, prod data is copied to another database in 'pseudo' real time. Unfortunately some 'old' datatypes are not supported by Dataguard. In JMS tables %JMSSTORE, the field RECORD is defined as LONG RAW and LONG RAW is one of the unsupported datatypes.
              >
              > Can we alter those tables in order to use BLOB instead of LONG RAW ?
              Not supported in 8.1. A supported Oracle BLOB capability will be
              available in the next release. I can think of two
              possible work-arounds:
              (1) The following might work, but is not currently supported by
              BEA: Manually create the table with a BLOB type and use
              an Oracle OCI or BEA type IV driver. Definitely do NOT use
              an Oracle thin driver, as data corruption may result.
              (2) I recall that another customer had a replication product
              they were able to get working with LONG RAW by modifying
              the table definition so that the table's handle index was a primary key.
              I do not remember the name of the product.
              (The latter modification is supported for certain releases
              now - but I think this usage must be confirmed with customer support.)
              Tom
              >
              > Regards,
              >
              > Bao Nguyen
              >
              

  • LONG RAW Columns and  Replication Set-up

    We are working to set up a replicated environment for all of our Oracle applications.
    I could not get a clear understanding of which Oracle version will support BLOB/CLOB/LONG RAW replication, or whether we can plan for replicating such applications.
    One Oracle Press book, "Oracle Backup & Recovery" (page 434), documents that Oracle doesn't support replication for columns that use BLOBs or CLOBs.
    As one of our applications was designed using a LONG RAW column, I was wondering about carrying over the existing LONG RAW column to be replicated like a CLOB/BLOB, if Oracle supports replication of BLOB/CLOB.
    It would be of great help if you could provide some insight into the complexity of having BLOBs in the applications, to take our efforts toward a replicated environment further.
    Thanking you in anticipation.
    Bhanu Prakash
    < [email protected]>

    1) LONG and LONG RAW have been deprecated since 8i, so you shouldn't be using them for anything.
    2) LONG and LONG RAW don't even have decent support for manipulation from PL/SQL, so there is essentially no SQL support.
    3) It would be very rare to have anything to index in a LONG or a LONG RAW from a functionality standpoint. You're not likely, for example, to want to store more than 4k of data in a LONG and then do things like search for strings that start with a particular phrase. You're very unlikely to want to search a binary LONG RAW for rows where the binary data starts with a particular string of bytes. You'd potentially want to use Oracle Text on a LONG field to search for particular words and phrases in the text, but I'm not sure that existed prior to LONGs being deprecated.
    Justin

  • How to view contents in Long Raw datatype column

    Hi,
    We have two node RAC database with 10.2.0.4.0 version.
    OS - IBM AIX.
    We have a table with a column with datatype "LONG RAW" in production. It stores image files.
    We need to send the images from a few rows to a third-party vendor; basically, they need to view the images.
    Earlier, I exported to a dump file using Data Pump and sent it to the vendor, but the vendor says they are not able to view the images. Can you please suggest the best method to transfer the images (LONG RAW datatype) and a method to view them?

    We have a table with a column of datatype "LONG RAW" in production. It stores image files. We need to send the images from a few rows to a third-party vendor; basically, they need to view the images. Earlier, I exported to a dump file using Data Pump and sent it to the vendor, but the vendor says they are not able to view the images.
    How is the vendor trying to use the extracted images? Data exported with Data Pump must be imported into another database with Data Pump. The same applies to the exp utility (you must use imp to load the dump into a database).
    If you're careful you should be able to write a binary file using utl_file.
    Regarding the LONG RAW: is there any way you could convert to BLOBs? LONGs and LONG RAWs are notoriously hard to work with.
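    Whatever tool does the extraction, the binary write has to happen in bounded chunks; utl_file.put_raw, for example, is limited to 32767 bytes per call. A minimal Python sketch of that chunked-write pattern (the chunk size mirrors the utl_file limit; the "image" here is dummy data, not an actual LONG RAW fetch):

```python
import os
import tempfile

MAX_CHUNK = 32767  # utl_file.put_raw's per-call limit, reused as chunk size

def write_binary_chunked(path, data, chunk=MAX_CHUNK):
    """Write binary data to a file in bounded chunks."""
    with open(path, "wb") as f:
        for offset in range(0, len(data), chunk):
            f.write(data[offset:offset + chunk])

# Dummy "image" larger than one chunk, standing in for a LONG RAW value
image = os.urandom(100_000)
path = os.path.join(tempfile.mkdtemp(), "row1.jpg")
write_binary_chunked(path, image)

with open(path, "rb") as f:
    assert f.read() == image  # the file round-trips intact
```

    A file written this way is a plain binary image the vendor can open directly, unlike a Data Pump dump.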

  • Updating a LONG RAW field in Oracle DB from SQL Server

    Hello Experts,
    I need to be able to update a LONG RAW (binary, it looks like the SQL Server type equivalent is Image or varbinary(max)) field which resides on an Oracle server and is connected to the SQL Server 2012 via a Linked Server.
    I can retrieve data in general; I'm just trying to find out how to deal with LONG RAW.
    Thanks!
    CB

    Hello,
    It seems that the LONG datatypes in Oracle have a lot of restrictions. According to this blog, LONG and LONG RAW columns cannot be used in distributed SQL statements.
    In that case, you should update the LONG RAW column on the Oracle side. You can try to use OPENQUERY, as Rick posted above, to send the SQL statement to Oracle and execute it.
    Regards,
    Fanny Liu
    Fanny Liu
    TechNet Community Support

  • Error when querying long raw

    Hi,
    I am using developer 6i with oracle 10.2.0.4 on win 2008.
    I created a table as below:
    create table image_table (filename varchar2(255) primary key, image long raw);
    I created a trigger in Forms to upload an image and store the link in the database. The image itself is stored in a directory.
    The image link is saved in image_table.
    But when I query the table, I get the error below:
    SQL> select * from image_table;
    ERROR:
    ORA-00932: inconsistent datatypes
    no rows selected
    When i query the table in TOAD, i get the output as
    FILENAME, IMAGE
    ateeq, (BLOB)
    Please suggest how to solve this problem.
    Thanks,

    The LONG and LONG RAW datatypes have been deprecated since Oracle 8.0, in 1998, so 14 years ago!
    Is there any sound reason why you can't use a BLOB?
    A LONG RAW column cannot be displayed directly, so a
    select * from image_table
    where one of the columns is a LONG RAW is expected not to work,
    and if you had read the documentation (which you never do; I remember you from previous doc questions), you would have known and not have asked Yet Another Redundant Question!
    Sybrand Bakker
    Senior Oracle DBA

  • BLOB or LONG RAW ?

    Hi,
    Who can explain to me the differences between the BLOB format and the LONG RAW format? Which of the two is better for storing pictures and sound in an Oracle database?
    It's for an application project in Developer V6 on the web.
    Thanks,
    Bart

    Bart (guest) wrote:
    : Hi,
    : Who can explain to me the differences between the BLOB format
    : and the LONG RAW format? Which of the two is better for
    : storing pictures and sound in an Oracle database?
    : It's for an application project in Developer V6 on the web.
    : Thanks,
    : Bart
    I would recommend using BLOBs. These are the new datatypes in
    Oracle8; they can store up to 4 GB (LONG RAW: 2 GB). In addition,
    LONG RAW exists only for backward compatibility (see your Oracle 8
    documentation). Furthermore, you can use the DBMS_LOB package on
    LOB datatypes to manipulate LOB data in the database.
    peter

  • Updating a LONG RAW column

    I have a table with a column of type LONG RAW that can take binary content of arbitrary length (up to 2 GB). I try to copy content from one row to another using the following SQL:
    UPDATE TEAM_ADM.Content SET (Content, ContentType) =
    (SELECT Content, ContentType FROM Content WHERE ContentId = in_SourceContentID)
    WHERE ContentID = in_TargetContentID;
    Content.Content is the column in question.
    Oracle returns with error ORA-00997:
    ORA-00997 illegal use of LONG datatype
    Cause: A value of datatype LONG was used in a function or in a DISTINCT, WHERE, CONNECT BY, GROUP BY, or ORDER BY clause. A LONG value can only be used in a SELECT clause.
    Action: Remove the LONG value from the function or clause.
    Question: How can I copy a LONG RAW column from one row to another?
    Regards,
    Kjell Tangen

    Hello,
    It seems that the LONG datatypes in Oracle have a lot of restrictions. According to this blog, LONG and LONG RAW columns cannot be used in distributed SQL statements.
    In that case, you should update the LONG RAW column on the Oracle side. You can try to use OPENQUERY, as Rick posted above, to send the SQL statement to Oracle and execute it.
    Regards,
    Fanny Liu
    Fanny Liu
    TechNet Community Support
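    Since a LONG value can appear only in a select list, one common workaround is to pull the value into the client (or a PL/SQL variable) and write it back to the target row through a bind variable, so it never appears inside the UPDATE's subquery. A minimal sketch of that fetch-then-rebind pattern, using Python's built-in sqlite3 as a stand-in database (the table and column names come from the question; Oracle-specific binding details are omitted):

```python
import sqlite3

# Stand-in schema mirroring the Content table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Content (ContentId INTEGER PRIMARY KEY, "
             "Content BLOB, ContentType TEXT)")
conn.execute("INSERT INTO Content VALUES (1, X'DEADBEEF', 'image/jpeg')")
conn.execute("INSERT INTO Content VALUES (2, NULL, NULL)")

source_id, target_id = 1, 2

# Step 1: SELECT the binary value into the client (a LONG can appear
# in a select list)...
row = conn.execute(
    "SELECT Content, ContentType FROM Content WHERE ContentId = ?",
    (source_id,)).fetchone()

# Step 2: ...then UPDATE the target row through bind variables, so the
# binary value never has to appear inside the UPDATE statement itself.
conn.execute(
    "UPDATE Content SET Content = ?, ContentType = ? WHERE ContentId = ?",
    (row[0], row[1], target_id))

copied = conn.execute(
    "SELECT Content, ContentType FROM Content WHERE ContentId = ?",
    (target_id,)).fetchone()
print(copied)  # (b'\xde\xad\xbe\xef', 'image/jpeg')
```

    In Oracle the same two-step shape applies, though LONG RAW values over 32 KB need piecewise handling; converting the column to a BLOB, as suggested elsewhere in this thread, removes the restriction entirely.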

  • Mapping CLOB and Long in xml schema

    Hi,
    I am creating an xml schema to map some user defined database objects. For example, for a column which is defined as VARCHAR2 in the database, I have the following xsd type mapping.
    <xsd:element name="Currency" type="xsd:string" />
    If the Oracle column is CLOB or LONG (the Oracle datatype), could you please tell me how I can map it in the XML schema? I do not want to use an Oracle SQL type like
    xdb:SQLType="CLOB", since I need a generic type mapping for CLOB. Would xsd:string still hold good for CLOB as well as LONG (the Oracle datatype)?
    Please help.
    Thanks,
    Vadi.

    The problem is that LONGs are not buffered but are read from the wire in the order defined. The problem is the same as:
    ResultSet rs = stmt.executeQuery("select myLong, myNumber from tab");
    while (rs.next()) {
        int n = rs.getInt(2);       // forces the driver to skip past the LONG
        String s = rs.getString(1); // fails: the LONG bytes were discarded
    }
    The above will fail for the same reason. When the statement is executed, the LONG is not read immediately; it is buffered in the server waiting to be read. When getInt is called, the driver reads the bytes of the LONG and throws them away so that it can get to the NUMBER and read it. Then, when getString is called, the LONG value is gone, so you get an exception.
    Similar problem here. When the query is executed the CLOB and BLOB locators are read from the wire, but the LONG is buffered in the server waiting to be read. When Clob.getString is called, it has to talk to the server to get the value of the CLOB, so it reads the LONG bytes from the wire and throws them away. That clears the connection so that it can ask the server for the CLOB bytes. When the code reads the LONG value, those bytes are gone so you get an exception.
    This is a long-standing restriction on using LONG and LONG RAW values and is a result of the network protocol. It is one of the reasons Oracle deprecates LONGs and recommends using BLOBs and CLOBs instead.
    Douglas
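    The ordering restriction Douglas describes can be simulated with a plain forward-only stream. This is a toy model, not the actual Oracle wire protocol: column values arrive in select-list order and, once skipped, cannot be re-read.

```python
# Toy model of the wire: column values arrive in select-list order and
# can only be consumed forward, never re-read (illustrative only).
def wire(*column_values):
    for value in column_values:
        yield value

def read_long_first(stream):
    long_bytes = next(stream)  # LONG consumed in wire order: fine
    number = next(stream)
    return long_bytes, number

def read_number_first(stream):
    # To reach column 2 the driver must consume (and discard) column 1.
    _discarded = next(stream)
    number = next(stream)
    # The LONG bytes are gone; trying to read them now fails, which is
    # the toy-model analogue of the exception JDBC users see.
    try:
        long_bytes = next(stream)
    except StopIteration:
        long_bytes = None
    return long_bytes, number

ok = read_long_first(wire(b"LONGDATA", 42))
print(ok)   # (b'LONGDATA', 42)
bad = read_number_first(wire(b"LONGDATA", 42))
print(bad)  # (None, 42) -- the LONG value was lost
```

    Reading the LONG column first (i.e. in select-list order) succeeds; reading any later column first destroys it, which is exactly why the select-list position of a LONG matters in JDBC.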
