Buffer operation

Hi,
I am using the Oracle Spatial Java API release 8.1.6 (the one with the adapter, geom, sref, and util packages). Is there any way I can use the Oracle Spatial buffer operation that is provided in newer versions of the Oracle Spatial Java libraries?

I might be mistaken, but there is no spatial buffer operation in the sdoapi Java libraries that ship with 10.2.0.x, or in the 10.1.x version downloadable from OTN. The javadoc entry is a doc bug.
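If you only need the buffer on the server side, one workaround is to let the database compute it and fetch the result over SQL/JDBC instead of looking for it in the Java API. A minimal sketch, where MY_TABLE, GEOM and the bind variable are placeholders for your own schema:
-- buffer distance 100 (meters for geodetic data), tolerance 0.05
SELECT SDO_GEOM.SDO_BUFFER(t.geom, 100, 0.05) AS buffered_geom
  FROM my_table t
 WHERE t.id = :id;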
Jayant

Similar Messages

  • Can I connect an external signal generator to the NI7344 to...

    Can I connect an external signal generator to the NI7344 to use its
    PID characteristics to precisely control the output?
    I am using the flex motion board (NI7344) to control the force output
    of a linear motor. Using contouring and buffer operations I have
    successfully created a system which can output a controlled force
    that is sinusoidal in form. I can achieve a reasonable output for
    waveforms of 10 Hz; the use of higher frequencies is limited by the
    contouring operation (a frequency of 10 Hz means I can only describe
    one period of a sine wave with 10 points; 10 ms between each point =
    10 Hz). Can I therefore connect an external analogue wave generator to
    the NI7344 and use its PID characteristics to output waveforms of
    higher frequency? Could I output an analogue waveform from an
    E-series DAQ via the RTSI cable to the NI7344 and control this? Any
    ideas or advice would be much appreciated.

    Duncan,
    You can use a DAQ card to output an analog waveform, but it will be a software call, not part of an onboard program. RTSI can be used to pass clock and trigger signals, so if you want to route a clock signal to do the control, you could.
    A. Talley

  • Buffered Messaging Problem

    I'm just starting out with Streams and was working with the example in Chapter 23 of "Oracle Streams Advanced Queuing User's Guide and Reference".
    It works. Then I decided I wanted to use a buffered queue for non-LCR messaging. So I changed the enq_proc such that
    enqopt.delivery_mode := DBMS_AQ.BUFFERED;
    enqopt.visibility := DBMS_AQ.IMMEDIATE;
    The procedure compiles on 10.2.0.1 on Windows XP, but when I try to enqueue an ANYDATA message, here's what I get:
    BEGIN
    oe.enq_proc(ANYDATA.convertobject(oe.order_event_typ(
    2500,'05-MAY-01','online',117,3,44699,161,NULL,'APPLY')));
    END;
    BEGIN
    ERROR at line 1:
    ORA-25292: Buffer operations are not supported on the queue
    ORA-06512: at "SYS.DBMS_AQ", line 243
    ORA-06512: at "OE.ENQ_PROC", line 11
    ORA-06512: at line 2
    Part of Metalink doc id 230901.1 says "In 10.2 a new feature was added for Oracle Streams AQ called Buffered Messaging. Buffered messaging enables users and applications to enqueue messages into and dequeue messages from a buffered queue." but I'm not finding out exactly how to do that.
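    For reference, here is roughly what my changed enq_proc looks like now. This is a simplified sketch: I've kept the queue name from the book's example, so treat it and the payload handling as placeholders.
    CREATE OR REPLACE PROCEDURE oe.enq_proc (p_event IN ANYDATA) AS
      enqopt   DBMS_AQ.ENQUEUE_OPTIONS_T;
      msgprop  DBMS_AQ.MESSAGE_PROPERTIES_T;
      msgid    RAW(16);
    BEGIN
      -- Buffered (in-memory) enqueue: BUFFERED delivery is paired with IMMEDIATE visibility;
      -- message properties are left at their defaults.
      enqopt.delivery_mode := DBMS_AQ.BUFFERED;
      enqopt.visibility    := DBMS_AQ.IMMEDIATE;
      DBMS_AQ.ENQUEUE(queue_name         => 'oe.oe_queue',   -- placeholder queue name
                      enqueue_options    => enqopt,
                      message_properties => msgprop,
                      payload            => p_event,
                      msgid              => msgid);
    END;
    /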
    Any help is greatly appreciated.

    If I understand the problem correctly...
    I think you might want one of two things:
    1. create a new instance of your object instead of re-using the one already sent.
    2. if you HAVE to re-use the same object, you must clear the object client-side, otherwise its signature will remain the same, and the receiver will figure it's already been downloaded. You must then implement a separate flag to tell the receiver that the data has changed.
    Hope this helps,
    Eric

  • How to speed a function to save-retrieve a class?

    The code below is my function to save and restore any class using a ByteBuffer.
    It works fine in both directions, but I want to speed it up.
    For example, the lines that make the conversion are too dense:
    if (tipo.equals("double")) {BBuff.putDouble((Double) value);}
    Besides that, there is the error handling....
    Any ideas?
    Thanks
    private void object(Object OBJ, boolean read) {
        // Requires java.lang.reflect.Field; BBuff is the ByteBuffer field of this class.
        String tipo = "";
        Object value = null;
        Field[] fields = OBJ.getClass().getFields();
        for (Field FF : fields) {
            try {
                value = FF.get(OBJ);
            } catch (IllegalArgumentException e2) {
                e2.printStackTrace();
            } catch (IllegalAccessException e2) {
                e2.printStackTrace();
            }
            if (FF.getType().getName().contains(".")) {
                object(value, read);     // nested object: call again with the same mode
                continue;
            }
            tipo = FF.getType().getName();
            if (read) {
                // save: primitive field -> buffer
                if (tipo.equals("double")) { BBuff.putDouble((Double) value); }
                if (tipo.equals("int"))    { BBuff.putInt((Integer) value); }
                if (tipo.equals("float"))  { BBuff.putFloat((Float) value); }
            } else {
                // restore: buffer -> primitive field
                try {
                    if (tipo.equals("int"))    FF.setInt(OBJ, BBuff.getInt());
                    if (tipo.equals("double")) FF.setDouble(OBJ, BBuff.getDouble());
                    if (tipo.equals("float"))  FF.setFloat(OBJ, BBuff.getFloat());
                    value = FF.get(OBJ);
                } catch (IllegalArgumentException e) {
                    e.printStackTrace();
                } catch (IllegalAccessException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    Edit: this is all assuming you go ahead with what you're trying to do. I don't personally think it's a very good idea; there are better ways of going about it, as already mentioned.
    OK. First off, I don't see the need for the call to "value = FF.get(OBJ);" at the end of the loop. It's going to get called again at the start of the next iteration, so it's unnecessary.
    Boxing and unboxing the primitives you're working with will give you a speed hit, and I think you should be able to avoid it. Before you call FF.get(OBJ), test whether FF.getType().getName() gives the name of a primitive. If it's an int, declare an int and use Field's getInt() method and ByteBuffer's putInt() method, and so on.
    As I said, I'd strongly recommend you separate it into a read() and a write() method. This is not just for neatness or readability; there's unnecessary work being done. The first try block still executes even when it's a read-from-byte-buffer operation, which is unnecessary. Also, where is BBuff defined? It should probably be passed in as an argument.
    Before you make any changes for speed testing, make a test run that saves and loads, say, 10,000 objects and time it, then compare this after any changes.

  • Sdo_buffer question

    I am hoping that someone can help me. I know this is a super easy fix. I am looking to create a buffer around one x,y coordinate. Am I on the right track?
    select sdo_geom.sdo_buffer(MDSYS.SDO_GEOMETRY(2001,8307,MDSYS.SDO_POINT_TYPE(2560808.6,387514.844,0),NULL,NULL), m.diminfo,100,.0005)
    from tractblockpoint2000 t, user_sdo_geom_metadata m
    select sdo_geom.sdo_buffer(MDSYS.SDO_GEOMETRY(2001,8307,MDSYS.SDO_POINT_TYPE(2560808.6,387514.844,0)
    ERROR at line 1:
    ORA-13205: internal error while parsing spatial parameters
    ORA-06512: at "MDSYS.SDO_3GL", line 439
    ORA-06512: at "MDSYS.SDO_GEOM", line 3065

    The answer is yes. It is easy to buffer a point.
    There are more questions to ask and answer here, though, because your data is not valid.
    Your sdo_gtype = 2301: this is not allowed. If I were to guess from looking at your data, I would guess you want 2001.
    Your sdo_srid = 8307: this is fine if your coordinates are between -180 and 180 in longitude (x) and -90 and 90 in latitude (y). Your data is not in this range.
    If I were writing this, I would use the sdo_point_type instead of sdo_elem_info_array and sdo_ordinate_array.
    A two-D point in the sdo_ordinate_array should only have 2 ordinates (x and y, not x, y, null); if stored in the sdo_point_type, then you need x, y, and null (see the snippet just before the rewritten query below).
    You don't need a select ... from dual to buffer the geometry; you can call the function in the operator.
    An arc tolerance of 1/2 meter (0.5) is very fine - you will get a geometry with lots of vertices, and it may take a long time to get results. Below is an example of your query, rewritten, with a 10 meter arc tolerance. Further below is the same query written as sdo_within_distance, which includes an implicit, highly accurate buffer operation and an anyinteract.
    Note that since these queries are in a geodetic coordinate system and you didn't specify units, the default is meters.
    The rewritten queries use the geod_cities table, which is downloadable from OTN ( http://otn.oracle.com/products/spatial ): click on training on the right, then exercise solutions.
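    For reference, here are the two valid ways to store a 2-D point (coordinates borrowed from the rewritten query below):
    -- stored in SDO_POINT_TYPE (preferred for plain points): x, y, z, with z NULL for 2-D
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-75.13, 40, NULL), NULL, NULL)
    -- stored in the ordinate array: only x and y, no trailing NULL
    SDO_GEOMETRY(2001, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 1, 1), SDO_ORDINATE_ARRAY(-75.13, 40))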
    Your query rewritten:
    select city
    from geod_cities t
    where sdo_relate(t.location, sdo_geom.sdo_buffer( mdsys.SDO_GEOMETRY(2001,8307, sdo_point_type(-75.13,40,NULL),null,null), 1000, 0.05, 'arc_tolerance=10'), 'mask = anyinteract querytype=window') = 'TRUE';
    But best done using sdo_within_distance:
    select city
    from geod_cities t
    where sdo_within_distance(t.location, mdsys.SDO_GEOMETRY(2001,8307, sdo_point_type(-75.13,40,NULL),null,null), 'distance=1000') = 'TRUE';

  • Programming Agilent N5242A through VISA commands using LAN

    Hi,
    I'm trying to program an Agilent N5242A through VISA commands over LAN, with no success.
    The connection to the N5242A is working, but when I use the viWrite command it doesn't respond.
    I used the Agilent "Connection Expert" tool and it worked.
    I tried to use Measurement & Automation Explorer, but there is no LAN option there; I have version 4.5.
    This is what I tried to do:
    status = viOpenDefaultRM (&defaultRM);
    if (status < VI_SUCCESS) {
        printf("Could not open a session to the VISA Resource Manager!\n");
        exit (EXIT_FAILURE);
    }
    /* Now we will open a session via TCP/IP to the instrument (ni.com example left for reference) */
    //status = viOpen (defaultRM, "TCPIP0::ftp.ni.com::21::SOCKET", VI_NULL, VI_NULL, &instr);
    status = viOpen (defaultRM, "TCPIP0::169.254.73.18::5025::SOCKET", VI_NULL, VI_NULL, &instr);
    if (status < VI_SUCCESS) {
        printf ("An error occurred opening the session to TCPIP0::169.254.73.18::5025::SOCKET\n");
        viClose(defaultRM);
        exit (EXIT_FAILURE);
    }
    viSetAttribute (instr, VI_ATTR_TCPIP_NODELAY, VI_TRUE);
    // status = viWrite (instr, "INIT:CONT ON", 13, &count);
    status = viWrite (instr, "*RST", 4, &count);
    status = viWrite (instr, "SENSe1:FREQuency:STARt 4000000000", 33, &count);
    status = viWrite (instr, "SENSe1:FREQuency:STOP 7000000000", 32, &count);
    Best Regards
    Israel
    Best Regards
    Boris

    TCPIP SOCKET (or SCPI-Raw) connections normally require a termination character (Line Feed, 0x0A) to be sent, but there isn't one in your viWrite() calls.
    Write it like:
    status = viWrite (instr, "*RST\n", 5, &count);
    Also, I think viPrintf() will be easier than viWrite() for ASCII writes, because you don't have to specify the byte length to send. Once you set the buffer operation attributes like below:
    status = viSetAttribute( instr, VI_ATTR_WR_BUF_OPER_MODE, VI_FLUSH_ON_ACCESS);
    status = viSetAttribute( instr, VI_ATTR_RD_BUF_OPER_MODE, VI_FLUSH_ON_ACCESS);
    then you can use viPrintf() like below:
    status = viPrintf( instr, "SENSe1:FREQuency:STOP %f\n", 7000000000.0);

  • Phone firmware upgrade

    How can new phone firmware be installed on BE3K phones? I'd like to upgrade 6945s to 9.3(1) to implement the new jitter buffer operation.
    Thx, Jim

    Hi Jim,
    1) You need to copy the firmware file to a USB key or an SFTP server, then select the file from that location.
    2) Yes
    Uploading the firmware should take about 10 minutes. Since a reboot is required, there is an additional 20-25 minutes of downtime because of it.
    Once the server and phones are rebooted, they should get the new firmware. Each release has a software compatibility matrix; please check against the table here:
    http://www.cisco.com/en/US/docs/voice_ip_comm/cucmbe3k/compat/3kcompmtx.html
    Thanks

  • ORA-25292 while setting up CDC

    I'm trying to set up CDC between a 9.2 source and a 10.2 staging database. After a little bit of struggle the setup went fine, but I'm getting "ORA-25292: Buffer operations are not supported on the queue" in the alert log on the source db.
    Has anyone tried CDC between 9.2 and 10.2, or faced the above error anywhere else?
    Regards,
    Ali

    I am setting up Streams between 9.2.0.4 and 10.2.0.1 and am getting a similar error. I found some notes on Metalink regarding this: Note:4285404.8 and Note:358776.1.
    Let me know what you find out.

  • Calculations competing for resources when they run in parallel

    I have an allocation process where I run 5 different calcs in 5 different
    cubes (with exactly the same structure and settings) to speed up my process.
    - None of them are using CALCPARALLEL
    - We have 6 CPUs with 4 cores each (24 processors total)
    - Enough RAM for 5 cubes
    - AIX 64-bit OS
    Even though I have adequate resources to run the calcs in parallel, for some
    reason the calcs are waiting for each other to complete. The 5 calcs start at the
    same time and should take roughly the same time based on what they are
    calculating, but they seem to be waiting for some resource to free up. Run
    times for these calcs are very random; for example, if calc1 takes 1 hour and calc5
    takes 20 mins in the first test, calc1 takes 20 mins and calc5 takes 1 hour in
    the next test.
    Did anybody see this scenario before? Any idea what resource the calcs
    are competing for? Please shed some light on it.
    Thanks,
    SR.

    OK, let me first confirm some things from your previous emails. When you say ASO calcs, do you mean "custom calcs" (why not post your MaxL scripts)? Hopefully you are not referring to aggregations as calcs.
    You say you are doing calcs on multiple cubes concurrently - please post the script.
    You say you are loading from BSO, calcing, and then using a report to extract from ASO. Are you doing an aggregation after the data load? Remember that when aggregations run, the whole cube is "frozen" and the agg is given exclusive use of the cube. I suspect that you should have no aggregations defined for these cubes.
    Finally, you say you are doing multiple loads/calcs/extracts in each of your 5 cubes. How - post the scripts please. You have read the section of the DBAG below regarding load buffers and custom calcs, right?
    Understanding Data Load Buffers for Custom Calculations and Allocations
    When performing allocations or custom calculations on an aggregate storage database, Essbase uses temporary data load buffers. If there are insufficient resources in the aggregate storage cache to create the data load buffers, Essbase waits until resources are available.
    Multiple data load buffers can exist on a single aggregate storage database. The data load buffers that Essbase creates for allocations and custom calculations are not configurable. You can, however, configure the data load buffers that you create for data loads and postings.
    If you want to perform allocations and custom calculations concurrently with data loads and postings, set the resource usage for the data load buffers that you create for data loads and postings to a maximum of 0.8 (80%). The lower you set the resource usage setting, the greater the number of allocations and custom calculations that can run concurrently with data loads and postings. You can also configure the amount of time Essbase waits for resources to become available in order to process load buffer operations.
    To configure data load buffers, use the alter database MaxL statement with the initialize load_buffer grammar and ASOLOADBUFFERWAIT configuration setting.
    I question whether this section of the documentation is correct. I would think that a calculation would need exclusive access to the cube, otherwise it would be calculating against a moving target. I will try to reach out to someone at Oracle to ask the question.

  • TCP Provider: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.

    Hi All,
    We have snapshot replication. The job is failing intermittently due to the error below.
    Error messages:
    The process could not connect to Subscriber 'XX:SD'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20084)
    Get help: http://help/MSSQL_REPL20084
    TCP Provider: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full. (Source: MSSQLServer, Error number: 10055)
    Get help: http://help/10055
    A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information
    see SQL Server Books Online. (Source: MSSQLServer, Error number: 10055)
    Get help: http://help/10055
    Login timeout expired (Source: MSSQLServer, Error number: HYT00)
    Get help: http://help/HYT00
    Please suggest

    Hi Khushi,
    According to your description, your snapshot replication job fails and you come across an error related to the TCP provider. Based on my research, the issue can occur in the following two situations.
    1. The OS runs out of memory for TCP buffers. This can happen when both the /PAE switch and the /3gb switch are set in c:\boot.ini. If the applications require many OS resources, for example by opening many TCP connections, the OS can run out of memory for resources such as TCP buffers.
    2. The OS runs out of available TCP “ephemeral” ports. When the client machine is opening many TCP connections and is running Windows Server 2003, Windows XP, or any earlier version of Windows, it may run out of TCP “ephemeral” ports.
    To work around the issue, please try the two solutions below.
    1. Remove the /3gb switch from c:\boot.ini. The root problem in this case is memory pressure on the OS, so removing the /3gb switch will give more memory to the OS and will alleviate this problem.
    2. Make more ephemeral ports available by following the steps in this article:
    http://support.microsoft.com/kb/196271
    For more information about the process, please refer to the article:
    http://blogs.msdn.com/b/sql_protocols/archive/2009/03/09/understanding-the-error-an-operation-on-a-socket-could-not-be-performed-because-the-system-lacked-sufficient-buffer-space-or-because-a-queue-was-full.aspx
    Regards,
    Michelle Li

  • What causes BUFFER GETS and PHYSICAL READS in INSERT operation to be high?

    Hi All,
    I am performing a huge number of INSERTs into a newly installed Oracle XE 10.2.0.1.0 on Windows. There is no SELECT statement running, just INSERTs one after the other, 550,000 in total. When I monitor the session I/O from Home > Administration > Database Monitor > Sessions, I see the following stats:
    BUFFER GETS = 1,550,560
    CONSISTENT GETS = 512,036
    PHYSICAL READS = 3,834
    BLOCK CHANGES = 1,034,232
    The presence of two of these stats confuses me. Though the session is only doing INSERTs, why should there be BUFFER GETS of this magnitude, and why should there be PHYSICAL READS? Aren't these counters for read operations? The BLOCK CHANGES value is clear, since the huge number of writes changes that many blocks. Can anyone explain what causes these counters to show such high values?
    The columns in the display table are as follows (from the page mentioned above):
    1. Status
    2. SID
    3. Database Users
    4. Command
    5. Time
    6. Block Gets
    7. Consistent Gets
    8. Physical Reads
    9. Block Changes
    10. Consistent Changes
    What do CONSISTENT GETS and CONSISTENT CHANGES mean for a typical INSERT operation? And does anyone know which tables or views these values come from?
    Thanks,
    ...

    Flake wrote:
    Hans, thanks.
    The table just has 2 columns, both of which are varchar2(500). No constraints, no indexes, and no foreign key references are in place. The total size of RAM in the system is 1GB, and yes, there are other GUI applications running, like a Firefox browser, Notepad and command terminals.
    But what do these other applications have to do with Oracle BUFFER GETS, PHYSICAL READS etc.? Awaiting your reply.
    Total RAM is 1GB. If you let XE decide how much RAM is to be allocated to buffers, on startup that needs to be shared with any/all other applications. Let's say that leaves us with, say, 400M for the SGA + PGA.
    The PGA is used for internal stuff, such as sorting, which is also used in determining the layout of secondary facets such as indexes and uniqueness. Total PGA usage varies in size based on the number of connections and required operations.
    And then there's the SGA. That needs to cover the space requirements for the data dictionary, any/all stored procedures and SQL statements being run, user security and so on, as well as the buffer blocks which represent the tablespaces of the database. Since it is rare that the entire tablespace will fit into memory, stuff needs to be swapped in and out.
    So - put too much space pressure on the poor operating system before starting the database, and the SGA may be squeezed. Put that space pressure on the system and you may end up with swapping or paging.
    This is one of the reasons Oracle professionals will argue for dedicated machines to handle Oracle software.
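    If you want to watch those same counters outside the XE web console, they come straight from the session I/O view. A quick sketch against the standard dictionary views (nothing XE-specific) that shows the same five columns per session:
    -- Session I/O counters, one row per connected user session
    SELECT s.sid,
           s.username,
           i.block_gets,          -- "Buffer/Block Gets" (current-mode gets)
           i.consistent_gets,     -- "Consistent Gets"
           i.physical_reads,      -- "Physical Reads"
           i.block_changes,       -- "Block Changes"
           i.consistent_changes   -- "Consistent Changes"
      FROM v$session s, v$sess_io i
     WHERE i.sid = s.sid
       AND s.username IS NOT NULL
     ORDER BY i.block_gets DESC;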

  • Buffer(sort) operator

    Hi,
    I'm trying to understand what the "buffer sort" operation is in the following explain plan:
    0 SELECT STATEMENT
    -1 MERGE JOIN CARTESIAN
    --2 TABLE ACCESS FULL PLAYS
    --3 BUFFER SORT
    ---4 TABLE ACCESS FULL MOVIE
    In the Oracle 9i Database Performance Guide and Reference, "buffer sort" is not mentioned, although all the other operations in the explain plan are.
    What does it mean? Does it take place in main memory, or is it an external sort?
    Thank you.

    A BUFFER SORT typically means that Oracle reads data blocks into private memory, because the blocks will be accessed multiple times in the context of the SQL statement execution. In other words, Oracle sacrifices some extra memory to
    reduce the overhead of accessing the blocks multiple times in shared memory.
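    You can reproduce it with any join that has no join condition between the two tables, for example (table names borrowed from your plan, so treat this as a sketch):
    -- A cartesian product: with no join predicate between PLAYS and MOVIE, the optimizer
    -- buffers one side (BUFFER SORT) and joins it to every row of the other.
    EXPLAIN PLAN FOR
    SELECT *
      FROM plays, movie;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);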
    Hope this will clear your doubts.
    Thanks.

  • Operating System's I/O Buffer Size

    Hi there,
    Can anybody tell me how to determine the operating system's I/O buffer size?
    I have Windows 2000 Server with Oracle 9.2 on it.
    Thanks in advance!!

    Hi,
    Run some statspack snapshots, then take some reports; you will find an indication of whether this parameter is set well for your application in the "average blocks per read" figure in the datafile section.
    db_file_multiblock_read_count=16
    Take care: a high value encourages full table scans over index usage.
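    Without a full statspack report, a rough check against v$filestat gives the same "average blocks per read" figure (standard dictionary views; the ratio is only meaningful for files that actually see multiblock reads):
    -- Average blocks per physical read, per datafile
    SELECT d.name,
           f.phyrds,
           f.phyblkrd,
           ROUND(f.phyblkrd / NULLIF(f.phyrds, 0), 2) AS avg_blocks_per_read
      FROM v$filestat f, v$datafile d
     WHERE d.file# = f.file#;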
    Nicolas.

  • Get I/O buffer size of operating system...

    Hi,
    Is there any script in Oracle which will display the I/O buffer size of the operating system I use?
    I need it in order to set the DB_FILE_MULTIBLOCK_READ_COUNT parameter appropriately.
    Thanks a lot,
    Simon

    Hi,
    I found a script to find the I/O buffer size of the operating system. It is as follows:
    create tablespace tester datafile 'C:\oracle\product\10.2.0\oradata\EPESY\test.dbf' size 10M reuse
      default storage (initial 1M next 1M pctincrease 0);
    create table testing tablespace tester
      as select * from all_objects
       where rownum < 50000;
    select relative_fno from dba_data_files
     where tablespace_name = 'TESTER';
    select phyrds, phyblkrd from v$filestat where file# = <#relative_fno#>;
    select count(*) from testing;
    select phyrds, phyblkrd from v$filestat where file# = <#relative_fno#>;
    In an example explaining the above script, there were the following figures:
    select phyrds, phyblkrd from v$filestat where file# = <#relative_fno#>;
    PHYRDS  PHYBLKRD
       154      1220
    The test ends by dividing PHYBLKRD by PHYRDS, so the above example yields a result of 7.92, which is close to 8 - so the effective multiblock read count is 8.
    NOTE: the underlined words are the author's, not mine.
    I ran the above script and found a figure of about 10.85. So what might the effective multiblock read count be?
    Thanks a lot,
    Simon

  • Fastest array operations on a 2D circular buffer

    I'm trying to create a type of circular buffer on a largish set of 2D data, which are samples of voltage from a data acquisition board.  The boards are Measurement Computing, and a lot of the nice programming features built into DAQmx for NI boards aren't easily available to me.
    I'm grabbing chunks of samples, 1000 per channel * 64 channels, every 100 ms.  I'm calling this a 'page'.  For each page of samples I take a median for noise filtering and then I publish this median for multiple threads to use.  I also want to be able to string together pages of samples as if they were one longer data acquisition, for up to 30 seconds.  I'm doing it this way because the threads that are expecting their data every ~100 ms can't release these AI boards for long periods of time to allow other threads to use them to perform long scans.  The data coming back from my boards is a 2D array.
    I have enough RAM available to pre-allocate the memory to hold all these pages, and I've been playing with the In Place structures for a while now, but I haven't been able to land on the magical combination that will allow me to replace any page in the buffer.  It's easy enough using the subarray option of the In Place structure to replace either the first page or the last page, but it gets more complicated to do a page somewhere in the middle without having to resort to Case structures.  I tried to do it with nested In Place structures, but it seems as if the subarrays that get created in the lower levels already go out of scope by the time the top level gets them assigned, and I just get gibberish on the output side.  Does this make sense?

    SmokeMonster wrote:
    I tried to do it with nested In Place structures, but it seems as if the subarrays that get created in the lower levels already go out of scope by the time the top level gets them assigned, and I just get gibberish on the output side.  Does this make sense?
    Sorry, but at least to me, this doesn't make sense.  Can you post your code?  I can't see how "out of scope" is a concept that applies here - LabVIEW keeps track of the memory for you and should never lose track of memory that's still in use.
    I posted one approach to a 2D circular buffer; maybe it's of some use to you.
