Efficient copy of a BLOB in DB

Dear forte users,
I have a DB table containing a binary large object (BLOB), and I
want to make a copy of a certain row. For efficiency I want to avoid
loading the huge binary data (approx. 1.8 MB) into forte and then saving it
to the DB again. I tried an INSERT INTO ... SELECT statement, and
although the same construct works for simple data types, it does
not work for LONG RAW:
ORA-00997: illegal use of LONG datatype
Does anybody have a workaround? Any ideas are appreciated...
Rgds,
Tamas Deak
Tamas Deak
Lufthansa Systems Hungary
(forte developer)
2-6 Mazsa ter, Budapest, 1107, HUNGARY
(36-1) 4312 973
[email protected]

Dear A-K!
Please have a look at the following example:
conn pm/pm
desc print_media
SELECT dbms_lob.getlength(ad_photo)
FROM print_media;

This example is from http://www.psoug.org/reference/dbms_lob.html.
Yours sincerely
Florian W.
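
A note on the original question (a sketch, not something stated in the thread): if the column really is a BLOB rather than LONG RAW, a plain INSERT ... SELECT copies the row entirely inside the database, so nothing has to be loaded into forte. ORA-00997 indicates the column is still LONG RAW; in that case one workaround is a one-time migration of the column to BLOB (for example TO_LOB in a CREATE TABLE ... AS SELECT, or ALTER TABLE ... MODIFY on 9i and later, if I remember correctly), after which the server-side copy works. A minimal sketch with hypothetical table and column names (big_docs, payload):

-- Server-side row copy: the BLOB never leaves the database.
INSERT INTO big_docs (id, payload)
SELECT :new_id, payload
FROM   big_docs
WHERE  id = :old_id;

-- One-time migration of a LONG RAW column into a BLOB in a new table (hypothetical names):
CREATE TABLE big_docs_blob AS
SELECT id, TO_LOB(payload) AS payload
FROM   big_docs_longraw;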

Similar Messages

  • How to copy one column BLOB value into another column of another database.

    How to copy one column BLOB value into another column of another database.
    The BLOB value contains a Word document.
    I thought of copying the BLOB value into a text file and then updating the new column with the same text from the text file. Will this work?
    Is there any other better way to do this?

    You're welcome.
    BLOB fields contain binary data, so I don't think that will work.
    Also, if I view the BLOB as text, can I copy it and insert it into the new database?
    I think your options are as I said: Data Pump or CTAS.
    Best Regards
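
    As a sketch of the CTAS route (not something spelled out in the thread, and all names here are hypothetical): as far as I know, Oracle allows CREATE TABLE ... AS SELECT and INSERT ... SELECT to read LOB columns from a remote table over a database link, even though selecting a LOB locator across a link directly raises ORA-22992. That keeps the copy server-to-server with no intermediate text file:

    CREATE DATABASE LINK remote_db
      CONNECT TO scott IDENTIFIED BY tiger USING 'remote_tns';

    -- Server-to-server copy of the BLOB column; no client round trip needed.
    CREATE TABLE doc_table_copy AS
    SELECT id, doc_blob
    FROM   doc_table@remote_db;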

  • Copy CLOB to BLOB

    Hi,
    Oracle 9.2.0.8
    How can I copy a CLOB to a BLOB ?
    Thanks

    Hi,
    This sample may help you.
    create or replace procedure CLOB2BLOB (p_clob in out nocopy clob, p_blob in out nocopy blob) is
    -- transforming CLOB -> BLOB
      l_off      number default 1;
      l_amt      number default 4096;
      l_offWrite number default 1;
      l_amtWrite number;
      l_str      varchar2(4096 char);
    begin
      begin
        loop
          dbms_lob.read ( p_clob, l_amt, l_off, l_str );

          l_amtWrite := utl_raw.length ( utl_raw.cast_to_raw( l_str ) );
          dbms_lob.write( p_blob, l_amtWrite, l_offWrite,
                          utl_raw.cast_to_raw( l_str ) );

          l_offWrite := l_offWrite + l_amtWrite;

          l_off := l_off + l_amt;
          l_amt := 4096;
        end loop;
      exception
        when no_data_found then
          NULL;
      end;
    end;
    /
    Procedure created.
    SCOTT@orcl_11g> show errors
    No errors.
    SCOTT@orcl_11g> CREATE TABLE test_tab
      (id       NUMBER,
       clob_col CLOB,
       blob_col BLOB)
    /
    Table created.
    SCOTT@orcl_11g> INSERT INTO test_tab (id, clob_col, blob_col)
      VALUES (1, 'This was entered in the clob column.', EMPTY_BLOB())
    /
    1 row created.
    SCOTT@orcl_11g> SET NULL Null
    SCOTT@orcl_11g> SELECT clob_col, UTL_RAW.CAST_TO_VARCHAR2 (blob_col) AS blob_data
      FROM test_tab
    /
    CLOB_COL
    BLOB_DATA
    This was entered in the clob column.
    Null
    SCOTT@orcl_11g> DECLARE
      v_clob CLOB;
      v_blob BLOB;
    BEGIN
      SELECT clob_col INTO v_clob FROM test_tab WHERE id = 1;
      SELECT blob_col INTO v_blob FROM test_tab WHERE id = 1 FOR UPDATE;
      clob2blob (v_clob, v_blob);
    END;
    /
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11g> SELECT clob_col, UTL_RAW.CAST_TO_VARCHAR2 (blob_col) AS blob_data
      FROM test_tab
    /
    CLOB_COL
    BLOB_DATA
    This was entered in the clob column.
    This was entered in the clob column.
    SCOTT@orcl_11g>
    Reference :-
    http://kr.forums.oracle.com/forums/thread.jspa?messageID=2702149
    Regards,
    Simma...
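
    A possible alternative not covered in the reply above: recent Oracle versions (9iR2 onward, if I recall correctly) ship DBMS_LOB.CONVERTTOBLOB, which does the CLOB-to-BLOB conversion in a single call and handles character-set conversion for you. A minimal sketch reusing test_tab from the example above:

    DECLARE
      v_clob        CLOB;
      v_blob        BLOB;
      v_dest_offset INTEGER := 1;
      v_src_offset  INTEGER := 1;
      v_lang_ctx    INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
      v_warning     INTEGER;
    BEGIN
      SELECT clob_col INTO v_clob FROM test_tab WHERE id = 1;
      SELECT blob_col INTO v_blob FROM test_tab WHERE id = 1 FOR UPDATE;
      -- convert the whole CLOB into the BLOB locator selected FOR UPDATE
      DBMS_LOB.CONVERTTOBLOB(v_blob, v_clob, DBMS_LOB.LOBMAXSIZE,
                             v_dest_offset, v_src_offset,
                             DBMS_LOB.DEFAULT_CSID, v_lang_ctx, v_warning);
      COMMIT;
    END;
    /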

  • Howto efficiently Copy array into data portal

    I have an array in VBS memory and would like to create a channel in the Data Portal based on that data set. I currently use this code construct to copy the data:
    ' intChannelCount is the number of samples
    'MaxHold is the data array to be copied
    'oFFTMaxHold is the data channel that is the target of the data
    for counter = 1 to intChannelCount
          oFFTMaxHold.Values(counter)=MaxHold(counter)
    next
    It is rather slow (and even much slower if the data preview is open in the Data Portal).
    Is there a method to copy the whole array to a channel in one operation?
    Ton
    Free Code Capture Tool! Version 2.1.3 with comments, web-upload, back-save and snippets!
    Nederlandse LabVIEW user groep www.lvug.nl
    My LabVIEW Ideas
    LabVIEW, programming like it should be!
    Solved!
    Go to Solution.

    Hi Ton,
    If you have DIAdem 10.2 or later you can do this with a single command -- passing the array of values as one of the command's parameters. This is a common situation when querying records from a database that you want to load into the Data Portal as channels.
    ChanRefsArray = ArrayToChannels(ValuesArray, ChanNamesArray)
    In the case of your example below, that would be:
    Call ArrayToChannels(MaxHold, Array(oFFTMaxHold.Name))
    Brad Turpin
    DIAdem Product Support Engineer
    National Instruments

  • Copy BLOB column  to filesystem

    Hello everybody.
    I'm working with Red Hat Linux and Oracle Database 10gR2.
    My problem: I have some tables with BLOB columns, and those BLOB columns hold pictures (jpg, gif, etc.). I need to copy those pictures (the BLOB columns) to the filesystem.
    Question: can anybody send me a link where I can see an example of how to do this?
    Thanks in advance and regards, everybody.

    You can check this asktom post
    Retrieving Data from BLOB into a file
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6379798216275
    or search for more demos on AskTom.
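
    For a self-contained server-side version, the usual pattern is DBMS_LOB plus UTL_FILE: read the BLOB in chunks and write them with PUT_RAW into a directory object. A minimal sketch, assuming a hypothetical table pics(id, img) and an existing directory object PIC_DIR:

    DECLARE
      l_blob   BLOB;
      l_file   UTL_FILE.FILE_TYPE;
      l_buffer RAW(32767);
      l_amount BINARY_INTEGER := 32767;
      l_pos    INTEGER := 1;
      l_len    INTEGER;
    BEGIN
      SELECT img INTO l_blob FROM pics WHERE id = 1;
      l_len  := DBMS_LOB.GETLENGTH(l_blob);
      -- open the target file in binary write mode in the PIC_DIR directory object
      l_file := UTL_FILE.FOPEN('PIC_DIR', 'pic_1.jpg', 'wb', 32767);
      WHILE l_pos <= l_len LOOP
        DBMS_LOB.READ(l_blob, l_amount, l_pos, l_buffer);  -- l_amount returns the bytes actually read
        UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);
        l_pos := l_pos + l_amount;
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
    END;
    /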

  • Blobs  uploading & downloading

    OJSP Utility tags <fileaccess: httpUpload> & <fileaccess:httpDownload> use the fileaccess.sql script to create a file access table in the database for uploading and downloading of BLOBs. Would someone tell me if this is creating a BLOB in a file system outside of the database (or what is also called an external LOB)?
    I heard that it's more efficient to put the BLOB in the DB instead of in a file system outside the DB; am I right?
    Thanks a bunch!

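    For reference (this is background, not an answer from the thread): an external LOB in Oracle is a BFILE; the table stores only a locator that points at a file in a server-side directory, while a BLOB keeps the bytes inside the database. A minimal sketch with hypothetical names:

    -- Directory object pointing at a server-side folder (hypothetical path)
    CREATE DIRECTORY upload_dir AS '/u01/app/uploads';

    -- BFILE column: only the locator is stored; the bytes stay in the filesystem
    CREATE TABLE uploaded_files (
      file_name VARCHAR2(256),
      file_data BFILE
    );

    INSERT INTO uploaded_files
    VALUES ('photo.jpg', BFILENAME('UPLOAD_DIR', 'photo.jpg'));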

  • Blob to filesystem

    How do you copy an internal blob to the filesystem?

    Two options:
    OCI
    JDBC
    (I'm assuming binary data)

  • Please help : here a few questions about BLOBs

    I'm using Forms6i patch11. I've got a web application with a block based on a table like this :
    TMP_STORED_FILE(file_name varchar2(256),
    file blob)
    The application allows the user to select a picture on the hard disk and store it in the database.
    I use read_image_file to show the picture in an Image item (based on the file column of the table).
    When the user performs a commit, I'm not sure that the file is correctly stored in the table... what do
    you think?
    Now, I want to show in the image item the picture stored in the table when a record is queried. I have seen many Java code samples that are supposed to do that, but do I really have to use Java?
    I saw that if I want to show the picture, I first have to re-create the file, and then use read_image_file again. So: why should I store the pictures? I mean: I wanted to store the pictures so that I can control
    access to them. I wanted to delete the files after they have been stored. But if I have to recreate them,
    what about keeping the files on the server and never storing anything but the pathname and the filename?
    At least, I could control the security at the directory level. What do you think?

    Yes, thanks. I tried to commit the pictures yesterday, and I thought it was ok, but in fact nothing was
    inserted in my blob. I thought it was ok, because I found in the metalink the following :
    fix: To read the image stored in a BLOB column of some database table into an image item in a Forms block,
    the item should have property 'Database Item' = No. Image items can only be populated in two ways:
    1. with the read_image_file built-in. In this case the file needs to be read from a local filesystem (or from the middle tier for webforms)
    Or
    2. directly from a database table by fetching. In this case the image item must be based on a database column of BLOB or LONG RAW
    datatype, and cannot be a control item. There is no interface to manipulate the value of an image item programmatically.
    Possible solutions are:
    1. get the file from the BLOB column in the database into the local file system as a temporary file, then use read_image_file.
    <Note:66312.1> describes the steps for BFILEs; it needs to be modified for BLOB columns.
    Or
    2a) create a temporary table with a BLOB column, and copy the image from the BLOB storage table into it using a stored procedure with the DBMS_LOB package.
    <Note:61737.1> describes using the DBMS_LOB package in detail. In Forms create a separate block based on this temporary table,
    with only the image item visible, and display the just-copied image.
    Or
    2b) have a separate block based on the BLOB storage table, with only the image item visible, and always restrict the WHERE clause to select only the one
    particular image. The second way is probably easier to implement.
    Case 2a could be slower than the first one or 2b.
    Maybe I didn't understand it correctly (my English is not so good after all).
    Anyway, I finally succeeded in inserting my pics.
    Thanks again !!
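
    For option 2a in the note above, the copy into the display/temporary table can be done entirely in the database with DBMS_LOB.COPY. A minimal sketch, reusing TMP_STORED_FILE from the question and a hypothetical display table tmp_image_view(file_name, file_data BLOB):

    DECLARE
      l_src  BLOB;
      l_dest BLOB;
    BEGIN
      -- source image from the storage table (names from the question above)
      SELECT file INTO l_src
      FROM   tmp_stored_file
      WHERE  file_name = 'photo.jpg';

      -- create a row with an initialized BLOB and grab a writable locator
      INSERT INTO tmp_image_view (file_name, file_data)
      VALUES ('photo.jpg', EMPTY_BLOB())
      RETURNING file_data INTO l_dest;

      DBMS_LOB.COPY(l_dest, l_src, DBMS_LOB.GETLENGTH(l_src));
      COMMIT;
    END;
    /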

  • I have TB 24.6.0 and I deleted the menu or tool bar from my address book and I can't figure out how to get it back.

    I am trying to figure out how to efficiently copy a contact so I can add the contact information to an email. While experimenting I unchecked the menu bar or the tool bar from my address book and now I can't figure out how to get it back. So you could say I have two questions for which I would welcome suggestions:
    1. How do I get my menu/tool bar back into my address book?
    2. How can I efficiently copy and paste contact information?
    Thank you
    Rowdy1

    The alt key or F10 will reveal hidden toolbars. Then use View- Toolbars to turn them back on.
    Then press F9 when in a write window to turn on the contact sidebar for access to your address books.

  • How insert into table select from table works in jdbc driver?

    Hi. Suppose one table has two LOB fields, one of BLOB type and the other of CLOB type. I use the following SQL statement to copy a row into a new row:
    insert into table (id, file_body, file_content) select new_id, file_body, file_content from table where id = '111';
    After a commit on the connection, I can see the copied record in the table and both LOB fields are not null; this is the expected behavior.
    However, some days later the copy function became a problem: the BLOB field named file_body can be null when the copy job is done, while the other fields copy successfully.
    The issue can not be reproduced every time.
    I suppose the JDBC driver may try to allocate a byte buffer in the heap to perform the copy operation for BLOB fields; if there is not enough memory available in the heap, the copy operation may fail while the commit on the connection can still succeed. I can see a lot of OOM errors in the log files and I believe they contribute to the issue.
    Hope someone can give me a hint or comments.
    Thanks,
    SuoNayi

    I want to figure out where the memory leak point is, and I have tried the following approaches, but none worked:
    1. I have tried to dump the memory of the JVM but failed; I see the following errors:
    [root@localhost xxx]# jmap -heap:format=b 3027
    Attaching to process ID 3027, please wait...
    Debugger attached successfully.
    Server compiler detected.
    JVM version is 1.5.0_16-b02
    Unknown oop at 0x00002b21a24cd550
    Oop's klass is null
    Finding object size using Printezis bits and skipping over...
    Unknown oop at 0x00002b21a3634380
    Oop's klass is null
    Finding object size using Printezis bits and skipping over...
    2. The thread stack cannot be dumped successfully either:
    Thread 3046: (state = BLOCKED)
    - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be imprecise)
    Error occurred during stack walking:
    the version of java is:
    java version "1.5.0_16"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_16-b02)
    Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_16-b02, mixed mode)
    I have added the heap-dump-on-OOM option to the JVM arguments: -XX:+HeapDumpOnOutOfMemoryError.
    If there are other solutions please let me know, thanks.
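
    One way to narrow this down from the database side (a sketch, with a hypothetical table name standing in for the real one in the post) is to compare the LOB lengths of the source row and the copied row right after the commit; if file_body is already null or zero-length there, the data was lost during the INSERT ... SELECT rather than on a later read:

    SELECT id,
           DBMS_LOB.GETLENGTH(file_body)    AS body_len,
           DBMS_LOB.GETLENGTH(file_content) AS content_len
    FROM   doc_table
    WHERE  id IN ('111', :new_id);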

  • Date Filtering With Equals and Time in Interactive Reports

    Using Apex 4.2.0.00.27
    I have a date column in an Interactive Report. The default column and action menu filters include things like "in the last 2 hours". If I remove the time component of the date this fails to find any records.
    If I leave the time component in the date then the user also has the option (via action->filter menu) to select a filter for "equals" and then this fails to find any records.
    I would like to keep the time component but also allow the filter for "equals a specific date".
    Greg

    adamt wrote:
    Thank you very much fac586!
    It's probably obvious I'm not very familiar with APEX (not HTML/CSS!) but I managed to figure out a place to put what you said... If you are unfamiliar with the web technologies used in APEX, spend some time on the tutorials here: start with HTML, then XHTML, CSS, JavaScript and the HTML DOM, and now HTML5.
    I appended the following in the template definition section for "Region without Buttons and Titles" and the Interactive Report now looks how the users want it! I should probably take a copy of that template and only use it for the relevant IRs so that nothing else gets messed up. There are 3 ways to include CSS in web pages. The easiest is using an internal style sheet as you've done, but where you've put it is <a href="https://forums.oracle.com/forums/thread.jspa?messageID=10178616&#10178616">invalid according to the HTML specification</a>. The page HTML Header property in the APEX Page Attributes is the appropriate place for such style sheets. However, if you want to apply these styles globally across your application then that's not a scalable approach, as it involves modifying every page with an IR. The standards-compliant approach is to modify the required page templates, either to include an external style sheet containing these overrides, or (more ambitiously and efficiently) to copy and modify the theme files in a different location and change the page templates to reference them.
    <style type="text/css">
    .apexir_WORKSHEET_DATA th div {
      margin-left: 0;
    }
    .apexir_WORKSHEET_DATA th,
    .apexir_WORKSHEET_DATA td {
      border: 1px solid #CCCCCC;
    }
    </style>
    Always post code wrapped in <tt>\...\</tt> tags to preserve formatting and special characters.

  • EXEC_SQL connection error

    Hi,
    I'm trying to use EXEC_SQL to connect to a remote database to copy an image(BLOB) to my form.
    I've tested the code using DBMS_SQL to access non-BLOB columns from the remote database, then adapted those lines to EXEC_SQL to use a different database BEFORE adding the access to a BLOB, and I am getting error ... ORA-306500.
    Error trapping using messages shows the connection line is the issue, so I'm guessing my connection isn't working. As you can see below I've used the string scott/tiger@remote_db, but if my actual connection isn't working, is there a way I can test it and get a better message?
    I'm migrating from v6 to 10g forms and testing it in both.
    Thanks for your time :-)
    Elaine
    ==========Form Program Unit =====================
    PROCEDURE GET_DATAHUB_INFO_EXEC IS
    connection_id EXEC_SQL.CONNTYPE;
    cursorID EXEC_SQL.CURSTYPE;
    connect_str VARCHAR2(100);
    -- cursorID INTEGER;
    sqlstr VARCHAR2(1000);
    new_perid NUMBER;
    new_image BLOB;
    block_perid varchar2(10);
    nIgn PLS_INTEGER;
    BEGIN
         block_perid:=to_char(:images.person_id);
         connect_str:='scott/tiger@remote_db';
    -- a connection string is typically of the form 'username/password@database_alias'
    MESSAGE('before connect');
    connection_id := EXEC_SQL.OPEN_CONNECTION('connect_str');
    MESSAGE('before Open');
    cursorID := EXEC_SQL.OPEN_CURSOR(connection_id);
    -- cursorID := DBMS_SQL.OPEN_CURSOR;
    -- sqlstr := 'SELECT new.person_id, new.image FROM IBANK.IMAGES@remote_db new
    sqlstr := 'SELECT new.person_id FROM IBANK.IMAGES new
    WHERE new.person_id=to_number(:curr_perid)';
    -- DBMS_SQL.PARSE(cursorID, sqlstr,0);
    -- DBMS_SQL.BIND_VARIABLE(cursorID, ':curr_perid', block_perid);
    -- DBMS_SQL.DEFINE_COLUMN(cursorID, 1, new_perid);
    -- DBMS_SQL.DEFINE_COLUMN(cursorID, 2, new_image);
    MESSAGE('After open');
    EXEC_SQL.PARSE(connection_id, cursorID, sqlstr, exec_sql.V7);
    EXEC_SQL.BIND_VARIABLE(connection_id, cursorID, ':curr_perid', block_perid);
    EXEC_SQL.DEFINE_COLUMN(connection_id, cursorID, 1, new_perid);
    MESSAGE('After Define');
    -- EXEC_SQL.DEFINE_COLUMN(connection_id, cursorID, 2, new_image);
    -- after parsing, and calling BIND_VARIABLE and DEFINE_COLUMN, if necessary, you
    -- are ready to execute the statement. Note that all information about the
    -- statement and its result set is encapsulated in the cursor referenced as cursorID.
    nIgn := EXEC_SQL.EXECUTE(connection_id, cursorID);
    -- nIgn := DBMS_SQL.EXECUTE(cursorID);
    LOOP
         --Fetch rows
    IF EXEC_SQL.FETCH_ROWS(connection_id, CursorId)=0 THEN
    -- IF DBMS_SQL.FETCH_ROWS(CursorId)=0 THEN
         EXIT;
    END IF;
    -- retrieve into variables
    EXEC_SQL.COLUMN_VALUE(connection_id, CursorId,1,new_perid);
    -- EXEC_SQL.COLUMN_VALUE(connection_id, CursorId,2,new_image);
    -- DBMS_SQL.COLUMN_VALUE(CursorId,1,new_perid);
    -- DBMS_SQL.COLUMN_VALUE(CursorId,2,new_image);
    :images.nbt_curr_person_id:=new_perid;
    END LOOP;
    EXEC_SQL.CLOSE_CURSOR(cursorID);
    -- DBMS_SQL.CLOSE_CURSOR(cursorID);
    EXCEPTION
    WHEN EXEC_SQL.Package_Error THEN
    -- code obtained from another discussion thread that doesn't work
    MESSAGE('Code: ' || EXEC_SQL.Last_Error_Code ||
    ' Msg: ' || EXEC_SQL.Last_Error_Mesg ||
    ' Position: ' || EXEC_SQL.Last_Error_Position );
    RAISE;
    WHEN OTHERS THEN
    -- close the cursor and raise error again
    EXEC_SQL.CLOSE_CURSOR(cursorID);
    -- DBMS_SQL.CLOSE_CURSOR(cursorID);
    RAISE;      
    END;
    ==========================

    I still have not had an answer to this problem, so let's recap:
    1) I'm trying to test accessing data from a different DB (my_newDB) in forms by using EXEC_SQL and creating a new connection for the data.
    [I eventually need to access an image as a BLOB so can't use a simple DB link but am testing without for the moment]
    2) I have condensed the code used and included it below. If I don't have an Exception section I get error:
    FRM-40735: Key-nxtrec trigger raised unhandled exception ORA-306500
    3) If I add the exception section :
    EXCEPTION WHEN EXEC_SQL.Package_Error THEN
    Declare vSQLcode number := SQLcode;
    vSQLerrm Varchar2(200) := SQLerrm;
    Begin
    If vSQLcode = -306500 then -- Exec_SQL.package_error
    vSQLcode := -abs(Exec_SQL.Last_Error_Code);
    vSQLerrm := Exec_SQL.Last_Error_Mesg;
    End if;
    Message('Except: ' || vSQLerrm );
    Message(' ',no_acknowledge);
    End;
    I then get
    Except: ORA-00000: normal, successful completion
    Note this fires at the open_connection line and then no more of the program unit runs.
    You can see my debug messages, so I know where it is.
    -------------------- condensed Program Unit - Fires on KEY-NXTREC and KEY-DOWN
    PROCEDURE GET_DATABASE_INFO IS
    bIsConnected BOOLEAN ;
    connection_id EXEC_SQL.CONNTYPE; -- holds the connection
    cursorID EXEC_SQL.CURSTYPE;
    connect_str VARCHAR2(100);
    sqlstr VARCHAR2(1000);
    new_perid NUMBER;
    block_perid varchar2(10);
    nIgn PLS_INTEGER;
    BEGIN
         block_perid:=to_char(:b_images.person_id);
         connect_str:='scott/tiger@my_newdb';
    MESSAGE('before connect');
    connection_id := EXEC_SQL.OPEN_CONNECTION('connect_str');
    MESSAGE('before Open');
    cursorID := EXEC_SQL.OPEN_CURSOR(connection_id);
    sqlstr := 'SELECT new.person_id FROM IBANK.IMAGES@my_newdb new
    WHERE new.person_id=to_number(:curr_perid)';
    MESSAGE('After open');
    EXEC_SQL.PARSE(connection_id, cursorID, sqlstr, exec_sql.V7);
    EXEC_SQL.BIND_VARIABLE(connection_id, cursorID, ':curr_perid', block_perid);
    EXEC_SQL.DEFINE_COLUMN(connection_id, cursorID, 1, new_perid);
    MESSAGE('After Define');
    nIgn := EXEC_SQL.EXECUTE(connection_id, cursorID);
    LOOP
         --Fetch rows
    IF EXEC_SQL.FETCH_ROWS(connection_id, CursorId)=0 THEN
         EXIT;
    END IF;
    -- retrieve into variables
    EXEC_SQL.COLUMN_VALUE(connection_id, CursorId,1,new_perid);
    Message('new_perid='||new_perid);
    :b0.nbt_curr_person_id:=new_perid;
    END LOOP;
    EXEC_SQL.CLOSE_CURSOR(connection_id, cursorID);
    exec_sql.close_connection(connection_id);
    END;
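
    One hedged observation on the code above (not something confirmed in the thread): both versions call EXEC_SQL.OPEN_CONNECTION('connect_str') with a quoted literal, so the text 'connect_str' itself, rather than the scott/tiger@my_newdb value held in the variable, is used as the connect string. That alone would make the open_connection line fail. A minimal corrected sketch of that call:

    connect_str := 'scott/tiger@my_newdb';
    -- pass the variable, not a quoted literal
    connection_id := EXEC_SQL.OPEN_CONNECTION(connect_str);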

  • Variable Length

    Hi
    what is the maximum length a string variable can hold? say i want a variable zname to hold a string of 10,000 characters is it posible? kindly advice. if not what is the other way of doing it?

    Hi Prabhu,
    Though a string can generally store 256 characters (normally we use this), the following documentation will clear your doubt and show you the correct path.
    Open SQL supports three different types to store varying length character strings: VARCHAR, LONGVARCHAR and CLOB. Only columns of the VARCHAR type, whose maximum size is limited to 1000 characters, are allowed to occur in comparison expressions, ORDER BY clauses, and so on.
    From a functional point of view, LONGVARCHAR and CLOB columns behave the same, but the size restriction imposed on the LONGVARCHAR type allows this type to be mapped to an appropriate VARCHAR data type on each supported database platform. On most database platforms, this type is more efficient than the respective CLOB type.
    To store character strings with more than 1,333 characters, the CLOB type must be used.
    For portability reasons, Open SQL does not permit the storage of empty (zero-length) strings in any of these column types, because on some database platforms the empty string is always treated as a NULL value, which would lead to different query behavior on different database platforms. Furthermore, Open SQL forbids the storage of string values with trailing blanks in VARCHAR and LONGVARCHAR columns. This is because comparison semantics differs between blank-padded and non-padded semantics on the various database platforms. Blank-padded semantics compares two strings as equal if they differ in the number of trailing blanks only - that is, "ABC" equals "ABC " - whereas non-padded semantics compares two strings as equal only if they have equal length and no differing characters - that is, "ABC" is not equal to "ABC ".
    If an application needs to represent something like an "initial" value in a VARCHAR or LONGVARCHAR column, it is recommended that you use the string " " (a String object consisting of one single space). Although this is, strictly speaking, also a blank-padded string, it is accepted by Open SQL and can be treated in a portable way across all supported database platforms. In the case of CLOB columns, an initial value can and should be represented as a NULL value explicitly.
    Binary Strings
    Open SQL for Java supports the following data types to store binary strings:
    ·        BINARY
    ·        LONGVARBINARY
    ·        BLOB
    Only columns of the fixed BINARY type are comparable, whereas LONGVARBINARY and BLOB columns are not.
    From a functional point of view, LONGVARBINARY and BLOB columns behave the same, but the size restriction imposed on the LONGVARBINARY type allows this type to be mapped to an appropriate VARBINARY type on each supported database platform. On most database platforms, this type is more efficient than the respective BLOB type.
    To store binary strings with more than 2,000 bytes, the BLOB type has to be used.
    With Open SQL for Java you cannot store empty (zero-length) binary strings in any of these column types, because on some database platforms the empty string is always treated as a NULL value. If you need to represent a custom initial value, you should either use some outstanding value in accordance with your application or consider storing a NULL value explicitly. In the latter case, you should be aware of the NULL value handling in SQL comparisons.
    char(size)              Holds a fixed-length string (can contain letters, numbers, and special characters). The fixed size is specified in parentheses.
    varchar(size)           Holds a variable-length string (can contain letters, numbers, and special characters). The maximum size is specified in parentheses.
    tinytext                Holds a variable string with a maximum length of 255 characters.
    text, blob              Holds a variable string with a maximum length of 65,535 characters.
    mediumtext, mediumblob  Holds a variable string with a maximum length of 16,777,215 characters.
    longtext, longblob      Holds a variable string with a maximum length of 4,294,967,295 characters.

  • Varbinary(max) parameters in Nested Stored Procedures by value or reference

    Consider a situation where a stored procedure that takes a varbinary(max) (BLOB) input parameter
    then calls a nested stored procedure and passes that varbinary(max) along as an input parameter to the nested stored procedure.
    Is a copy of the BLOB provided to the nested stored procedure (passed by value), or is the BLOB passed by reference?
    My interest is in understanding the potential memory hit when handling large BLOBs in this environment.
    For example, if the BLOB is 200 MB, will SQL Server need to allocate memory for a new copy each time it's passed to another stored procedure at the next nest level?
    It looks like table type parameters are passed by reference, but I haven't been able to find any info on BLOBs in this context.

    The semantics for parameters in SQL Server is copy-in/copy-out. However, this does not mean that there cannot be optimizations. That is, there is no need to copy the BLOB as long as the procedure does not change it. Whether they actually have such an optimization, I don't know.
    I composed the repro below, and it is sort of interesting. If I restart my instance (SQL 2014), the memory usage grows by about 40 MB for each nested call, which certainly indicates that the BLOB is copied over and over again. But if I then run it again,
    the growth is not the same. This may be because it uses buffers already available. (A BLOB has to be a handle, like a table, when it is over some size.) Then again, it means that it is squeezing something else out of the cache.
    CREATE PROCEDURE K  @h varchar(MAX) AS
       SELECT SUM(pages_kb)*8192 AS K, datalength(@h) as Dlen FROM sys.dm_os_memory_clerks
       SELECT @h = ''
    go
    CREATE PROCEDURE L  @h varchar(MAX) AS
       SELECT SUM(pages_kb)*8192 AS L1, datalength(@h) as Dlen FROM sys.dm_os_memory_clerks
       EXEC K @h
       SELECT SUM(pages_kb)*8192 AS L2, datalength(@h) as Dlen FROM sys.dm_os_memory_clerks
       --SELECT @h = ''
    go
    CREATE PROCEDURE M  @h varchar(MAX) AS
       SELECT SUM(pages_kb)*8192 AS M1, datalength(@h) as Dlen FROM sys.dm_os_memory_clerks
       EXEC L @h
       SELECT SUM(pages_kb)*8192 AS M2, datalength(@h) as Dlen FROM sys.dm_os_memory_clerks
       --SELECT @h = ''
    go
    CREATE PROCEDURE N  @h varchar(MAX) AS
       SELECT SUM(pages_kb)*8192 AS N1, datalength(@h) as Dlen FROM sys.dm_os_memory_clerks
       EXEC M @h
       SELECT SUM(pages_kb)*8192 AS N2, datalength(@h) as Dlen FROM sys.dm_os_memory_clerks
       --SELECT @h = ''
    go
    DECLARE @h varchar(MAX) = replicate(cast('ABCD' AS varchar(MAX)), 1E7)
    EXEC N @h
    go
    DROP PROCEDURE K, L, M, N
    Erland Sommarskog, SQL Server MVP, [email protected]

  • [SOLVED] Broadcom BCM970015 mini-pcie crystal-hd card + VLC + YouTube

    Hi, since my notebook isn't powerful enough to play HD content I bought said card for about 42€ (the 970012 is much cheaper but doesn't support certain codecs) and installed it. Got it to work with VLC and YouTube pretty straightforward. For those who might be interested in the details, here's the story:
    Cloned the git repo from git.linuxtv.org/jarod/crystalhd.git
    and compiled the 'driver' and the 'linux_lib' and manually copied the firmware blob to /usr/lib.
    modprobe'd crystalhd.
    Noticed: Compiling gives an error until you add an
    #include <linux/delay.h>
    in crystalhd_flea_ddr.c. Wonder why they offer it buggy like that oO' but whatever.
    Cloned the vlc 2.1.0 repo from git.videolan.org/vlc.git
    -> bootstrap, configure --enable-crystalhd (I don't know if that parm is actually required, but what gives :-p), make, make install.
    Noticed here though that 'make install' actually will NOT correctly install it. A 'vlc --version' still shows the old one (1.1.13). So, copied the binary and created symlinks manually in /usr/bin.
    After this, invoking vlc with
    vlc --codec crystalhd
    will enable crystal-hd fine, as confirmed by the vlc command-line output you see.
    If it doesn't, see the debug forum thread I linked at the very end of this thread.
    NOTE: In 1080 I had smooth playback that was interrupted by "lags" every couple seconds. Turning up disk cache and file cache solved that for me. (Could also try turning down post-processing and loop filter, but maybe that isn't actually related?)
    About YouTube: It works fine, even 1080p (previously I couldn't even watch 720p). Full-screen mode is still a little bit jerky, but it's not the usual low-frame prob but some other odd sort of mini lags, well that's the hell of flash I guess.
    Although the latest flash (11.1.102.55-1) supports crystalhd out of the box it won't be enabled until you edit /etc/adobe/mms.cfg so that it has:
    EnableLinuxHWVideoDecode=1
    OverrideGPUValidation=true
    At first I was missing the GPU-line and it didn't accelerate.
    A final word - note that the crystalhd card only works single-threaded: basically, if you have a YouTube video running, all other subsequently opened YouTube videos or anything you play in VLC will NOT get hardware acceleration, because crystalhd is already occupied by the first video. So you cannot switch "on the fly" between videos; you have to close the first one, then open another!
    Also, when I loaded a new video file in VLC without closing the player first, I got a kernel panic :-p I didn't test though whether that actually happens every time. But it seems the driver could still require further improvements...
    Edit: I haven't attempted yet to compile ffmpeg with crystal-hd support, and I'm not sure if there are benefits over the pure VLC-solution. Let me know.
    Last edited by Jindur (2012-03-01 14:18:17)

    Jindur wrote:
    Also, when I loaded a new video file in VLC without closing the player first, I got a kernel panic :-p I didn't test though whether that actually happens every time. But it seems the driver could still require further improvements...
    Edit: I haven't attempted yet to compile ffmpeg with crystal-hd support, and I'm not sure if there are benefits over the pure VLC-solution. Let me know.
    Just a quick reply about the caches: I got similar issues with CrystalHD and VLC, and am busy fixing them. It turns out that the driver (latest version) is missing a NULL pointer check in DriverGetStatus(). Additionally, VLC still does something quirky while opening the driver, which doesn't occur with e.g. the latest XBMC, resulting in a resource exhaustion leading up to that driver issue. For the first one, here is a small patch (the raw, 'uncut' version):
    --- ../../../crystalhd/driver/linux/crystalhd_cmds.c 2012-03-18 15:59:58.743995349 +0100
    +++ crystalhd_cmds.c 2012-03-26 18:21:18.398150371 +0200
    @@ -745,7 +745,7 @@
    bool readTxOnly = false;
    unsigned long irqflags;
    - if (!ctx || !idata) {
    + if (!ctx || !idata || !(ctx->hw_ctx)) {
    dev_err(chddev(), "%s: Invalid Arg\n", __func__);
    return BC_STS_INV_ARG;
    @@ -828,6 +828,8 @@
    static BC_STATUS bc_cproc_reset_stats(struct crystalhd_cmd *ctx,
    crystalhd_ioctl_data *idata)
    + if (!(ctx->hw_ctx)) {
    + return BC_STS_INV_ARG; }
    crystalhd_hw_stats(ctx->hw_ctx, NULL);
    return BC_STS_SUCCESS;
    This still will produce a multitude of errors in VLC, which remain largely unfixed ; one option to make VLC behave a bit more nicely when the above error occurs is to add this patch to VLC's crystalhd.c in DecodeBlock():
    if (BC_FUNC_PSYS(DtsGetDriverStatus)( p_sys->bcm_handle, &driver_stat) != BC_STS_SUCCESS )
    p_dec->b_error=true;
    That's all for now, if I find more info I will share it here as this seems the only relevant thread on CrystalHD I found thus far
    edit 2012-03-29: the issue is actually in the closing/releasing of the crystalhd driver by VLC. Somehow the VLC player only releases the driver *after* the new playout has changed, and it does not seem like a race-condition. The problem no longer appears when I simply ignore this usage in "chd_dec_close()" in "driver/linux/crystalhd_lnx.c" in the latest CrystalHD Linux driver (comment the if block
    if(uc->in_use)
    ). This is a bad workaround, but it does the trick a bit better -> real fix is of course letting VLC handle the opening/closing of the driver better...
    Last edited by sander4 (2012-03-29 15:44:03)

Maybe you are looking for

  • JMF Diagnostics applet - how to resolve "JMF Classes not found"

    I am still having problems trying to get my USB camera to run in an applet. I'm using Windows XP with IE 6. I found the JMF Diagnostic Applet and when I run it, I get the following: Java 1.1 compliant browser.....Maybe JMF classes.....Not Found The t

  • How to add a whitelist in OS X Server 10.4.11?

    Customer with OS X Server mail 10.4.11 runs great, but after 1.5 years suddenly one customer is blocked by "Scan email for junk email"; when it's off, email will be delivered, when activated it's blocked. Is there a step-by-step manual to add the "blocked" se

  • Green flashing Facetime screen on Macbook Pro Retina.

    Long story: Purchased my Macbook Pro Retina in July 2012. No problems until 8 months later. At that time the computer would freeze and I would get the beach ball and have to re-start. I took it in for repairs, they replaced the logic board and reinst

  • Security exception when provisioning using multiple access policies

    We have upgraded our eDirectory connector to version 9.0.4.12. When provisioning manually all process tasks work correctly. However, when provisioning through an access policy or multiple access policies, once the eDirectory Create User task runs it c

  • Urgent :Fetching CRM Activity categories to the master list category table

    Hi all, I am working with Groupware Integration. I am following the "Best Practice" that is provided in help.sap.com. I'm using mySAP CRM 4.0, SP-09. The problem is in "Fetching CRM Activity categories to the master list category table" fo