DataSocket PSP Data Buffering

I have a VI that writes to a network shared variable using DataSocket; the DataSocket URL uses PSP.  I have another VI that reads the network shared variable, also using DataSocket.  I am experimenting with data buffering to see when data is lost if the Writer VI writes faster than the Reader VI.  Is data buffered when using DataSocket with PSP in the URL?  If not, I expect data will be lost.  If it is buffered, I don't expect data to be lost until the buffer is full or overflows.
Attached is a project with the network shared variables and the Writer and Reader VIs.  The VIs compare reading and writing directly through a shared variable node against using DataSocket.  With DataSocket, I am experiencing data loss as if there were no buffering; when using the shared variable node, I do not see data loss.  Run Reader.vi.  It reads two network shared variables every two seconds: one variable is read using DataSocket and one is read using a variable node.  Next, run Writer.vi.  It writes to two network shared variables every 0.5 seconds: one variable is written using DataSocket and one is written using a variable node.  Since the Writer VI is writing four times as fast as the Reader VI, data will need to be buffered to avoid data loss.  Monitor the Buffered Sequence and BufferedDS Sequence front panel indicators in the Reader VI.  Buffered Sequence is the data from the variable node; BufferedDS Sequence is the data from the DataSocket Read.
Attachments:
Net Share Var & DataSocket.zip ‏49 KB

Does PSP in the DataSocket URL change the data buffering?  Attached is a page from 'LabVIEW 8.5.1 Help / Fundamentals / Networking in LabVIEW / Concepts / Choosing Among LabVIEW Communication Features' mentioning lossless data transmission for DataSocket with the PSP protocol (second row in the table).  Does lossless data mean that a single packet is guaranteed to be sent by the writer and received by the reader, or does it provide that guarantee by buffering additional packets?
Attachments:
LabVIEW Communication Features.pdf ‏61 KB

Similar Messages

  • Info on SAP JRA data buffering

    Hello,
    I need help with the data buffering used with JRA remote function calls.
    This is what is written in the documentation.
    ● DaysRetention: the number of days the system keeps the data buffer entry
    ● MaxRetryCount: the maximum number of times you can resubmit requests
    ● RetryInterval: the number of milliseconds the system waits before resubmitting the query or action request (the scheduler adds one minute to this time)
    I'm wondering how to populate these three values correctly; it's not clear to me.
    Is data buffering activated only if communication via RFC is unavailable?
    If I'd like MII to repeat the RFC call a maximum of ten times, every 5 minutes, how can I configure data buffering accordingly?

    Mauro,
    Data buffering is for errors in communication, as stated in the first sentence under the Use heading in the help. You are interested in the MaxRetryCount and RetryInterval parameters. I am not sure whether your situation calls for changing the DaysRetention parameter; the default is 7 days.
    So...
    MaxRetryCount = 10
    RetryInterval = 5 min (5 * 60 * 1000) = 300,000 ms
    Or, if the scheduler's extra minute throws you off, use 4 * 60 * 1000 = 240,000 ms.
    Regards,
    Kevin

  • Binding a shared variable to a NI-PSP data object does not work

    Hi,
    I want to share data between an RT target and one or more hosts (LV 8.6.1). The network shared variables are deployed to the RT target.  According to NI, accessing shared variables from another project or host has to be done by defining a shared variable on the host and aliasing it to the NI-PSP data object on the target.
    I did that, and the host shared variable generated an error (0x8BBB0011) at run time.
    Next, I aliased it to a shared variable deployed on the host from another project. This did work.
    Another thing I tried was to bind the variable from the RT target to a display element:
    This is working!!! And, as you can see, the path of the NI-PSP data object is exactly the same! So what is the difference between binding a data object to a shared variable and to a display element?
    Is there a bug in the SVE or am I missing something here?
    The host project:
    The publisher VI
    Hope, someone has an answer.
    Regards
    Matthias Quade
    Attachments:
    AliasTestWrite-RT.vi ‏8 KB
    AliasTestConsumer.vi ‏8 KB

    Dear Mr. Quade,
    thank you for posting in the National Instruments forum. There is a known issue with the path of the bound variable in LabVIEW 8.6.1.
    Please download the patch for LabVIEW 8.6.1; it should solve your problem:
    http://joule.ni.com/nidu/cds/view/p/id/1255/lang/de
    Best regards from Munich
    MarianO

  • HTTP XI  - Data Buffering

    Hi Everyone,
    I am using the HTTP XI action block in a Business Logic transaction to send an XML document to the PI system, which in turn sends it to ECC.
    I am trying to test the data buffering capability with the following steps:
    1. Locked the PI user ID in the PI system that this action block uses to communicate with PI.
    2. Then executed the transaction in MII; the HTTP XI action block returned the Success property as "True" and the transaction processed successfully.
    3. After some time, I unlocked the PI user ID in the PI system.
    4. But I couldn't see the message in PI after I unlocked the user ID.
    Please advise what needs to be done. Is there something I am missing here?
    Thanks
    Mahesh

    All,
    I configured the HTTP XI action block by setting the values below for the data buffering parameters in the configuration link.
    Property                 Value
    Day Retention          7
    MaxRetryCount        50
    MaxInterval              30000
    Processing Type     Exactly Once in Order
    Apart from the above, the general parameters for connecting to PI (server name, service, interface) are given.
    To test the data buffering, we bring down the PI system. What I see is:
    1. While PI was down, we posted a couple of transactions in MII. I saw those transactions in the data buffering screen of MII for a minute, and after that they disappeared.
    2. After 30 minutes, we brought the PI system back up, but I couldn't see any of the transaction data that went to PI from the MII system. I also couldn't trace those messages in MII.
    I am not sure what configuration I am missing to make this work.
    I would appreciate it if anyone could provide any input.
    Thanks
    Mahesh

  • Tracking objects in oracle data buffers

    hi all,
    I am trying to move data blocks from the cold region of the DEFAULT pool to the hot region of the KEEP pool, and I would like to know if there's a simple way to find out which objects qualify and to move them.
    Comments and input would be appreciated.
    Thanks.

    User1082,
    I'm not sure I understand you correctly, but note that the KEEP pool is a separate RAM region (db_keep_cache_size) in addition to the db_cache_size parameter, which creates the DEFAULT pool.
    Determining the right size for the data buffers can be critical depending on the size of the database, and since the KEEP pool is meant to hold its objects at a near-100% buffer hit ratio, deciding which small tables and indexes to move into it can be a bit difficult. Hence, to achieve this, you should (a) be aware of the most frequently accessed tables in your DB, and (b) know which tables have high buffer residency.
    One recommended approach (by Donald Burleson) is to assign objects that have 80% or more of their data blocks in the buffer cache to the KEEP pool. This can be done manually or via dbms_job on a recurring basis, repeating the check periodically so that other objects reaching the 80% threshold are moved in as well; a sketch of such a check is shown below.
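    As a rough illustration (a sketch only, not a tested script; the owner/table/index names in the ALTER statements are placeholders), you could identify candidates from V$BH and then move them to the KEEP pool:

    -- Find segments with 80% or more of their blocks currently in the buffer cache.
    SELECT o.owner,
           o.object_name,
           o.object_type,
           COUNT(*)                            AS cached_blocks,
           s.blocks                            AS total_blocks,
           ROUND(100 * COUNT(*) / s.blocks, 1) AS pct_cached
    FROM   v$bh b
           JOIN dba_objects  o ON o.data_object_id = b.objd
           JOIN dba_segments s ON s.owner = o.owner AND s.segment_name = o.object_name
    WHERE  b.status <> 'free'
    GROUP  BY o.owner, o.object_name, o.object_type, s.blocks
    HAVING COUNT(*) >= 0.8 * s.blocks
    ORDER  BY pct_cached DESC;

    -- Then move a candidate object into the KEEP pool:
    ALTER TABLE scott.emp STORAGE (BUFFER_POOL KEEP);
    ALTER INDEX scott.emp_pk STORAGE (BUFFER_POOL KEEP);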
    For further understanding, please refer to Oracle Performance Tuning Guide provided by Aman.
    Hope this helps.
    Regards,
    Naveed.

  • Data Buffers and VMware

    I have virtualized a bare-metal DB machine. The VM host is ESXi 5.5. The disk subsystem on VMware, as it pertains to SQL, is much faster than bare metal. However, cached operations are taking much longer. For instance, I have a query which joins 2 tables and returns no rows. In both cases, the data should be in the data buffers. Nevertheless, bare metal takes 8 seconds to complete and the virtualized machine takes 16 seconds. Why?
    In both cases, I cleared the buffers, ran the query twice, and took the last result. In both cases disk activity was negligible.
    Thanks in advance
    Thanks in advance

    Are the indexes and statistics on these tables properly maintained and up to date? Please check this and confirm.
    Also, what about the execution plan, as asked above?
    Please share the SQL Server version and check whether both machines are at the same version in terms of the latest CU and hotfixes, in addition to the same major version and SP. A sketch of these checks and a repeatable cold-cache test is below.
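    Something like the following T-SQL, run on both machines, would make the comparison fair (Orders/OrderLines are placeholders for your two tables; this is only a sketch of the usual procedure):

    SELECT @@VERSION;                         -- confirm both servers are on the same build/SP/CU

    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
    UPDATE STATISTICS dbo.OrderLines WITH FULLSCAN;

    CHECKPOINT;
    DBCC DROPCLEANBUFFERS;                    -- empty the buffer pool ("blow the buffers")

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    SELECT COUNT(*)
    FROM   dbo.Orders AS o
           JOIN dbo.OrderLines AS l ON l.OrderID = o.OrderID;  -- run twice; compare the second (warm-cache) timing
    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;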
    Santosh Singh

  • Conditional indicators and data buffers

    Hi,
    When I was reading the help on VI memory usage (http://zone.ni.com/reference/en-XX/help/371361G-01/lvconcepts/vi_memory_usage/), I did not completely understand the part about conditional indicators and data buffers.
    Could someone provide one or two examples of it?
    Thanks,
    Andrej

    Hi Jeff,
    Sorry, but I still don't understand conditional indicators and data buffers.
    I tried checking the code you provided using "Show Buffer Allocations" in LV2012. I can see that both pieces of code allocate the same number of buffers; i.e., the one with the conditional indicator also created a buffer, as shown below (notice the black square dot on the Add function). If a conditional indicator does create additional buffers, as you explained in the previous post, why is an equal number of buffers created with non-conditional indicators too? Can you please explain this?
    Thanks,
    Ajay. 
    Attachments:
    Conditional Indiator.png ‏34 KB

  • PR05 Credit card data Buffering issue

    Hello,
    I have an issue with credit card data. I guess the problem is with document buffering.
    Here is the information:
    I am using Tcode PR05 (Travel Expense Manager) for US travel & expenses. TRVPA CCC is set to 4.
    I selected credit card expense data for a trip, and when I double-click on the credit card expense type, I can see the credit card information. Then I replaced it with a non-credit expense type and checked for credit card data by double-clicking on the item; there is no credit card data, which is correct. Later I replaced it with the old credit expense type again, but when I tried to see the credit card data, there was no credit card tab or information. I have debugged the entire code but have not found the solution.
    Please help me.
    Thanks

    Hi,
    This works as designed. Once you replace the credit card receipt with a non-credit-card receipt, the first thing that happens is that the credit card information is deleted. If you then replace this non-credit-card receipt with the old credit card receipt, the entry stays the same, without the credit card information, unless the credit card data comes from the buffer.
    Regards,
    Raynard

  • Datasocket Server data persists

    If I write to the DataSocket server, the data that was written will persist until it is overwritten. This allows it to be read by multiple readers, I guess. In my situation, however, I need to have the data on the server "erased" after it is read one time. Is there a server manager setting that will allow this?

    Hi,
    I think DataSockets are not that smart. They work like a global variable: whatever you write to them stays until the DataSocket server is shut down. What I did once was make the reader read the value and then overwrite it with a 0, which serves as the signal that there is no new value to be read. Another option is for the writer to add a prefix to the real data (a 1, for instance), so the writer writes "1" + the data, and the reader reads it and writes a zero back instead. The reader knows that if the string starts with 1 the data is valid, so it strips the prefix and uses the data; if it starts with zero, there is no new value to be read.
    Does that make sense?
    Marce

  • Clearing data buffered by smartforms

    Hi,
    Is there a way for me to clear data that has already been buffered by the standard SMARTFORMS program?
    Thanks.

    Hi,
    Please be clearer in your question. Do you mean that you want to clear all the data that has been passed to the form interface?
    Regards,
    Ram.

  • Datasocket and Shared Variables

    I am curious whether there is any advantage to using DataSocket to read/write shared variables (as opposed to a direct read/write).  I'm specifically talking about networked shared variables here.
    Is there any speed advantage to accessing shared variables through the DataSocket functions?  Since both a direct read/write and a DataSocket PSP read/write talk to the same variable engine, I assume they are equally efficient, but I'm looking for confirmation here.  I've seen benchmarks for shared variable performance, but none of them use DS/PSP to access the variables.
    Normally I would not even think of using DataSocket to access shared variables, but where I currently work we have a large app that does this and it works great.  I suspect that this functionality only exists in LV 8.x for backward compatibility and non-Windows OS compatibility and is not really meant to be used for new, Windows-based apps.  Am I off base on this?
    I am working in LV 8.5, BTW.....

    Hello Jared,
    Thank you for the reply with clarification. 
    Based on your comment, I changed the buffer parameters and also tried the programs with two different data types, previously StringArray and now String.
    In the attached LV8.6 project, you have all the programs, and shared variable library to review my tests. 
    There are two sets of two files - each set has a Write Shared Variable and Read Shared Variable file. One set is for StringArray type Shared Variable (named StrArr in the library), and the other set is for String type Shared Variable (named Str in the library).
    String Array example:
    MultipleDS-Write-SharedV-StrArr.vi / MultipleDS-Read-SharedV-StrArr.vi
    In my String Array shared variable, I use only a 4-element array, each element holding a 4-character string, meaning 16 bytes per String Array value. I have two loops in the write file, each writing an array of 4 strings to the same variable; each loop continues until the loop index is > 0. This means that, depending on the processor speed, the variable will sometimes be written 3 times and sometimes 4 times (the variable could have a new value before the loop condition is checked).
    So this means that if I have a buffer of 100 bytes (16 * 4 = 64 < 100), it's enough for 4 such arrays (of 4 elements, each element with 4 characters) to be buffered, giving the client (Read) program sufficient time to read them.
    I am using a 2048-byte buffer, which is much more than sufficient in my case.
    The writer loops wait 200 ms per iteration. The reader loop runs with a 100 ms DataSocket timeout and a 100 ms wait timer. This gives results without any loss. However, if I run the reader loop with a 1000 ms wait per iteration, data is lost; the 2048-byte buffer is not honored.
    In the read program, just to check whether all the data is read, I show the data in two different string indicators, one for each loop.
    String example:
    MultipleDS-Write-SharedV-Str.vi / MultipleDS-Read-SharedV-Str.vi 
    The String Array shared variable didn't show values in the Distributed System Manager, so I created another simple variable with the String data type.
    The writer program writes 4-character strings one by one in two loops, meaning a total of 8 strings of 4 characters each are written to the "Str" shared variable.
    The reader program, however, doesn't always display all 8 strings. Although the wait timer is not high (slow), it still usually misses some data. Data is overwritten even before the buffer is filled (in the buffer, I have defined 50 strings with 4 elements each).
    In both of the Read programs, I read using DataSocket. I had thought DataSocket had more ability to buffer. Earlier I used "BufferedRead" in DataSocket, which I have changed to just Read, because BufferedRead didn't give any particular buffering advantage when reading the shared variable.
    ---- This is an update on the issue.
    OK, just while typing the last paragraph above about DataSocket, something clicked in my mind, and I changed the DataSocket functions to plain shared variable reads (completely eliminating the DataSocket functions) in the read programs as well. And bingo, the buffer works as expected; even with very slow reading loops, there is no data loss in any of the program sets.
    The two changed Read programs are also included in the attached project: MultipleSV-Read-SharedV-Str.vi and MultipleSV-Read-SharedV-StrArr.vi
    So this means I can completely eliminate DataSocket (not even using PSP URLs in the DataSocket Open/Read functions) from my programs.
    One question here: what would be the advantage of this (or are there any side effects that I should keep in mind)?
    Vaibhav
    Attachments:
    DataSocket.zip ‏71 KB

  • Data has changed after passing through FIFO?

    Dear experts,
    I am currently working on digital triangular shaping using the 7966R FPGA + 5734 AI. I am using LabVIEW 2012 SP1.
    A few days ago I encountered a problem with my FIFOs that I have not been able to solve since. I'd be glad if somebody could point out a solution or my error.
    Short description:
    I am writing U16 values between ~32700 and 32800 to a U16-configured FIFO. The FIFO output does not coincide with the data I have been writing to the FIFO but is bit-shifted, or something is added. This problem does not occur if I execute the VI on the dev PC with simulated input.
    What I have done so far:
    I am reading all 4 channels of the 5734 inside an SCTL. The data is stored in 4 feedback nodes. I am applying triangular shaping to channels 0 and 1 by using 4 FIFOs that have been prefilled with a predefined number of zeros to serve as buffers. So it's something like (FB = feedback node):
    A I/O 1  --> FB --> FIFO 1 --> FB --> FIFO 2 --> FB --> Do something
    A I/O 2  --> FB --> FIFO 3 --> FB --> FIFO 4 --> FB --> Do something
    This code shows NO weird behaviour and works as expected.
    The Problem:
    To reduce the number of FIFOs needed, I then decided to interleave the data and use only 2 FIFOs instead of 4. You can see the code in the attachment. As you can see, I have not really changed anything about the code structure in general.
    The input to the FIFO is a U16. All FIFOs are configured to store U16 data.
    The data that I am writing to the FIFO can be seen in channel 0 of the output attachment.
    The output after passing through the two FIFOs can be seen in channel 2 of the same picture.
    The output after passing through the first FIFO (times 2) can be seen in channel 3 of the picture.
    It looks like the output is bit-shifted and truncated as it enters Buffer 1. Yet the difference between the input and output is not exactly a factor of 2. I also considered the possibility that the FIFO adds both write operations (CH0 + CH1), but that also does not account for the value of the output.
    The FIFOs are all operating normally, i.e. none throws a timeout. I also tried several different orders of reading from/writing to the FIFOs and different ways of enforcing this order (e.g. case structures, flat and stacked sequences). The FIFOs are also large enough to store the amount of data buffered, regardless of whether I write or read first.
    Thank you very much,
    Bjorn
    Attachments:
    FPGA-code.png ‏61 KB
    FPGA-output.png ‏45 KB

    During the last couple of days I tried the following:
    1. Running the FPGA code on the development PC with simulated I/O. The behavior was normal, i.e. the code performed as intended.
    2. I tested the code on the development PC with the square and sine wave generation VI as 'simulated' I/O. The code performed normally.
    3. I replaced the FIFOs with queues and ran my logic on the dev PC. The logic performed completely normally.
    4. Right now the code is compiling with constants as inputs, as you suggested...
    I am currently trying to get LabVIEW 2013 on the development machine. It seems like my last real hope is that the issue is a bug in the Xilinx 13.4 compiler tools and that the 14.4 tools will just make it disappear...
    Nevertheless, I am still open to suggestions. Some additional info about the FIFOs of concern:
    Buffer 1 and 2:
    - Type: Target Scoped
    - Elements Requested: 1023
    - Implementation: Block Memory
    - Control Logic: Target Optimal
    - Data Type: U16
    - Arbitrate for Read: Never Arbitrate
    - No. Elements Per Read: 1
    - Arbitrate for Write: Never Arbitrate
    - No. Elements Per Write: 1
    The inputs from the NI 5734 are U16, so I am wiring the right data type to the FIFOs. I also don't have any coercion dots in my FPGA VI. So far the problem has only occurred after the VI has been compiled onto the FPGA. Could some of the FIFOs/block memory be corrupted because we have written to the FPGA too often?

  • How to avoid "DBIF_RSQL_SQL_ERROR" while updating data to a ztable?

    Hi Friends,
    I am in urgent need of a solution to avoid the DBIF_RSQL_SQL_ERROR short dump when updating a Z table.
    The code is below.
    There is no inconsistency between the table and the internal table structure. I also used COMMIT WORK, but everything fails.
    "ORA-01438:
    *LOGIC FOR CLEARING PREVIOUS RELATED DATA IN TABLE
      LOOP AT WA.
        DELETE FROM YMATPRICNG WHERE WERKS = WA-WERKS.
    *                            AND MATNR = WA-MATNR
    *                            AND MAKTX = WA-MAKTX
    *                            AND V_BPMNG = WA-V_BPMNG
    *                            AND V_BAMNG = WA-V_BAMNG
    *                            AND V_INCREASE = WA-V_INCREAS
    *                            AND V_DECREASE = WA-V_DECREAS
    *                            AND INCREASE1 = WA-INCREASE1
    *                            AND DECREASE1 = WA-DECREASE1
    *                            AND NET = WA-NET.
        COMMIT WORK.
      ENDLOOP.
    > MODIFY YMATPRICNG FROM TABLE WA.   " <-- statement flagged in the short dump
      COMMIT WORK.
    Please suggest a solution as soon as possible.

    Hi,
    I am deleting the existing records first because I want only the latest values updated for the corresponding plant and material.
    I declared WA as an internal table with the structure of the YMATPRICNG table. And again, if I put only a few records on the selection screen via material number, it updates successfully.
    The only question is why it gets cancelled with this dump, saying that the nametab and the ABAP/4 definition are not consistent or the buffer is out of date.
    The problem has arisen because, within the database interface,
    one of the data buffers made available for the INSERT (UPDATE)
    is longer than the maximum defined in the database.
    On the other hand, it may be that the length in the NAMETAB
    does not match the maximum length defined in the database.
    (In this case, the length in the NAMETAB is longer.)
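    In case it helps: ORA-01438 means a value is larger than the precision defined for the column, so (as far as I understand) one of the numeric values in WA must exceed a column definition of YMATPRICNG. Would checking the column definitions against the largest values my program writes be the right way to narrow it down? Just a sketch of what I have in mind, run directly on the database:
    SELECT column_name, data_type, data_precision, data_scale, data_length
    FROM   all_tab_columns
    WHERE  table_name = 'YMATPRICNG'
    ORDER  BY column_id;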
    Regards
    Ricky

  • How to store data (in tdm files)

    I'm looking for advice on how to store data in tdm/tdx files.
    The main challenge is this: at regular time intervals there is new data available (say, every 10 minutes there are 10 minutes of new data available). All data is time based. So every ten minutes I may create a new tdm/tdx file containing this data segment. However, when I want to analyze the data I don't necessarily want to view 10 minutes of data from all channels, but maybe 20 hours of data from one particular channel.
    The way I've achieved this so far is to manually load each 10-minute segment of this particular channel and then add these parts together. This is both time-consuming and cumbersome. Is there a better way to do this that I've simply not discovered yet? Is there a better way to store the data to simplify this process?
    One solution is of course to save data from several 10-minute frames in one file, but seeing as there is a never-ending supply of data I can't simply save all the data in one giant file; it has to be split up at some point, and the problem will still remain.
    One factor to bear in mind here is that this is a rather large amount of data (maybe 10 GB each day), so the option of simply loading all the data into memory goes out the window rather quickly.
    Feel free to ask if I've not made myself very clear.

    Hi salte,
    If your test ever ended, and if you had LabVIEW 8.20 or higher, I would recommend using a TDMS file, which handles data appends flawlessly.  DIAdem 10.1 now also does data reduction and index windowing during file loading for TDM / TDMS / DataPlugin files, so you could easily load only the part of the file you want to look at.  But since you describe your data acquisition as never stopping and amassing 10 GB per day, I agree that it would be impractical to use only one data file.  So we are stuck with some number of files which each contain a part of your measurement.  This approach can have advantages, since you can save operational properties for each "buffer", such as average value, dominant frequency, ambient room temperature, etc., and later on you can use the DataFinder to query out only the data buffers which meet specific conditions based on these properties.
    The problem remains how to load and assemble data from multiple files.  This is an old problem in DIAdem, and one for which I have an efficient and, I hope, satisfactory workaround application.  It does what you describe already doing, in the minimum amount of time and with the minimum amount of user interaction, and it can be highly parametrized to suit your particular situation.  The ideal way to start the application would be as part of a ResultsList custom menu.  This would enable you to query out the buffers you want, highlight those rows in the ResultsList, right-click, choose your custom menu, and WHAM! the selected buffers for your queried channel(s) are automatically appended together in the Data Portal.  Launching from a ResultsList custom menu would mean that you could skip the file dialog and just read the data sources directly from the ResultsList selection.
    Let me know what you think,
    Brad Turpin
    DIAdem Product Support Engineer
    National Instruments
    Attachments:
    Importing Data from Multiple DataPlugin Files.zip ‏198 KB

  • Using CLOB data type - Pros and Cons

    Dear Gurus,
    We are designing a database that will be receiving comments from an external data source. These comments are stored as CLOBs in the external database. We found that only 1% of incoming data will be larger than 4000 characters, and we are now evaluating the pros and cons of storing only 4000 characters of incoming comments in a VARCHAR2 data type versus using a CLOB data type.
    Some of the concerns brought up during discussion were:
    - having to store CLOBs in separate tablespace;
    - applications such as Toad require changing default settings to display CLOBs in the grid; the default is not to display them;
    - applications that build web pages with CLOBs will struggle to fit 18 thousand characters, of which 17 thousand are blank lines;
    - caching CLOBs in memory will consume a big chunk of the data buffers, which will affect performance;
    - to manipulate CLOBs you need a PL/SQL anonymous block or procedure;
    - bind variables cannot be assigned a CLOB value;
    - dynamic SQL cannot use CLOBs;
    - temp tables don't work very well with CLOBs;
    - fuzzy logic search on CLOBs is ineffective;
    - not all ODBC drivers support Oracle CLOBs
    - UNION, MINUS, INTERSECT don't work with CLOBs
    I have not dealt with the CLOB data type in the past, so I am hoping to hear from you about any possible issues/hassles we may encounter.

    848428 wrote:
    Dear Gurus,
    We are designing a database that will be receiving comments from an external data source. These comments are stored as CLOBs in the external database. We found that only 1% of incoming data will be larger than 4000 characters, and we are now evaluating the pros and cons of storing only 4000 characters of incoming comments in a VARCHAR2 data type versus using a CLOB data type.
    Some of the concerns brought up during discussion were:
    - having to store CLOBs in a separate tablespace;
    They can be stored inline too. Depends on requirements.
    - applications such as Toad require changing default settings to display CLOBs in the grid; the default is not to display them;
    Toad is a developer tool, so that shouldn't matter. What should matter is how you display the data to end users etc., but that will depend on the interface. Some can handle CLOBs and others not. Again, it depends on the requirements.
    - applications that build web pages with CLOBs will struggle to fit 18 thousand characters, of which 17 thousand are blank lines;
    Why would they struggle? 18,000 characters is only around 18 KB in file size; that's not that big for a web page.
    - caching CLOBs in memory will consume a big chunk of the data buffers, which will affect performance;
    Who's caching them in memory? What are you planning on doing with these CLOBs? There's no real reason they should impact performance any more than anything else, but it depends on your requirements as to how you plan to use them.
    - to manipulate CLOBs you need a PL/SQL anonymous block or procedure;
    You can manipulate CLOBs in SQL too, using the DBMS_LOB package.
    - bind variables cannot be assigned a CLOB value;
    Are you sure?
    - dynamic SQL cannot use CLOBs;
    Yes it can. 11g supports CLOBs for EXECUTE IMMEDIATE statements, and pre-11g you can use the DBMS_SQL package with CLOBs split into a VARCHAR2S structure.
    - temp tables don't work very well with CLOBs;
    What do you mean by "don't work well"?
    - fuzzy logic search on CLOBs is ineffective;
    Seems like you're pulling information from various sources without context. Again, it depends on your requirements as to how you are going to use the CLOBs.
    - not all ODBC drivers support Oracle CLOBs;
    Not all, but there are some that do. Again, it depends what you want to achieve.
    - UNION, MINUS, INTERSECT don't work with CLOBs;
    True.
    I have not dealt with the CLOB data type in the past, so I am hoping to hear from you about any possible issues/hassles we may encounter.
    You may have more hassle if you "need" to accept more than 4000 characters and you are splitting them into separate columns or rows, when a CLOB would do it easily.
    It seems as though you are trying to find all the negative aspects of CLOBs and ignoring all the positive aspects, and also ignoring the negative aspects of not using CLOBs.
    Without context, your assumptions are just that, assumptions, so nobody can tell you whether it will be right or wrong to use them. CLOBs do have their uses, just as XMLTYPEs have their uses, etc. If you're using them for the right reasons then great, but if you're ignoring them for the wrong reasons then you'll suffer.
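    To illustrate a couple of the points above, here is a minimal sketch (the table comments_tab and column comment_text are made-up names, not anything from your schema):

    -- Manipulating a CLOB directly in SQL with DBMS_LOB:
    SELECT id,
           DBMS_LOB.GETLENGTH(comment_text)       AS clob_len,
           DBMS_LOB.SUBSTR(comment_text, 4000, 1) AS first_4000_chars
    FROM   comments_tab;

    -- Dynamic SQL held in a CLOB (11g and later accept a CLOB for EXECUTE IMMEDIATE):
    DECLARE
      l_sql CLOB;
    BEGIN
      l_sql := 'UPDATE comments_tab SET comment_text = EMPTY_CLOB() WHERE id = :1';
      EXECUTE IMMEDIATE l_sql USING 42;
    END;
    /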
