Doubts in GeoRaster Concept.

Hi everybody,
I have a few doubts about GeoRaster concepts.
I mosaicked multiple GeoRaster objects using "sdo_geor.getRasterSubset()" and was able to display the image properly. But while doing this I came across a suggestion from a few people. They said that mosaicking multiple rows of a GeoRaster table together is not going to produce meaningful results, because the interpolation methods won't have access to the data in the adjacent cells at the seams, since the cells needed exist in a different row (i.e. where two rows of GeoRaster either abut or overlap).
I assume Oracle takes care of all this. Please advise whether my assumption is true or whether the statement given is true.
Regards,
Baskar
Edited by: user_baski on May 16, 2010 10:49 PM

Hi Jeffrey,
Requirements:-
I have to mosaic 'n' GeoRaster objects. For example, if the table has 4 rows of GeoRaster objects, then I have to create a single image by mosaicking all the GeoRaster objects based on the envelope provided. (Note: I have to do this with queries, without using the GeoRaster API.)
Workflow:-
1. Get the connection and table details.
2. Retrieve the necessary information from the database, such as SRID, MAXPYRAMID, SPATIALRESOLUTION, EXTENT, etc. To get the extent, I used the SDO_AGGR_MBR function.
3. Using "MDSYS.SDO_FILTER" and the bounding-box values, I create an ArrayList containing the raster IDs retrieved from the raster data table that cover the bounding box provided in the filter command.
4. Then I pass the bounding-box values into the "sdo_geor.getCellCoordinate" function, retrieve the row and column numbers in the GeoRaster image, and create a number array containing the starting and ending row/column numbers.
5. Then I wrote a PL/SQL block around the "sdo_geor.getRasterSubset" function, which takes the number array and the raster ID as input parameters and in turn returns a BLOB (a rough sketch of this call is shown after this list).
6. I execute step 5 in a loop over all the raster IDs obtained in step 3. For example, if the ArrayList size is 4, I end up with four BLOB objects.
7. Finally, I create a new image from the BLOB objects after some scaling and cropping based on the individual GeneralEnvelope of each raster object.
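For reference, here is a minimal sketch of what my steps 5 and 6 look like from the Java side. The table GEO_IMAGES, key column GEORID and GeoRaster column GEORASTER are placeholders, not my actual schema, and it assumes the getRasterSubset overload that takes a pyramid level, a cell-space window as an SDO_NUMBER_ARRAY, a band list and an OUT BLOB:

import java.sql.Blob;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Types;
import java.util.ArrayList;
import java.util.List;

public class SubsetFetcher {

    // Anonymous PL/SQL block: select one GeoRaster row by its ID and ask
    // getRasterSubset for the requested cell window, returning the cell data as a BLOB.
    private static final String PLSQL =
        "DECLARE\n" +
        "  gr MDSYS.SDO_GEORASTER;\n" +
        "  lb BLOB;\n" +
        "BEGIN\n" +
        "  SELECT georaster INTO gr FROM geo_images WHERE georid = ?;\n" +
        "  DBMS_LOB.CREATETEMPORARY(lb, TRUE);\n" +
        "  SDO_GEOR.getRasterSubset(\n" +
        "    gr,\n" +
        "    0,                            -- pyramid level\n" +
        "    SDO_NUMBER_ARRAY(?, ?, ?, ?), -- startRow, startCol, endRow, endCol\n" +
        "    NULL,                         -- all bands\n" +
        "    lb);\n" +
        "  ? := lb;\n" +
        "END;";

    // Returns one BLOB per raster ID (steps 5 and 6 of the workflow above);
    // the BLOBs are then scaled/cropped and stitched into the final image (step 7).
    public static List<Blob> fetchSubsets(Connection conn, List<Integer> rasterIds,
                                          int[] window) throws Exception {
        List<Blob> blobs = new ArrayList<>();
        for (int id : rasterIds) {
            try (CallableStatement cs = conn.prepareCall(PLSQL)) {
                cs.setInt(1, id);
                for (int i = 0; i < 4; i++) {
                    cs.setInt(2 + i, window[i]);   // cell window computed in step 4
                }
                cs.registerOutParameter(6, Types.BLOB);
                cs.execute();
                blobs.add(cs.getBlob(6));          // raw cell data for this raster
            }
        }
        return blobs;
    }
}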
I followed all of the above steps and successfully created a mosaic image. However, a few people suggested that mosaicking this way does not produce meaningful results, because the interpolation methods won't have access to the data in the adjacent cells at the seams, since the cells needed exist in a different row. I assume Oracle will take care of these things. Moreover, they suggested keeping a single row in the GeoRaster table instead of multiple GeoRaster rows, and using the "SDO_GEOR.updateRaster" function to update a part of the raster object so that the pyramids are rebuilt automatically.
So please suggest which is the better way to do the mosaicking, and whether my assumption is correct or not.

Similar Messages

  • Doubt in Dataguard concept

    Dear all,
    Please help me with this; I am very confused.
    I have a couple of doubts about Data Guard concepts.
    1) When an archived log is transferred from the primary to the standby:
    a. Will DBWR be in an active state or not on the standby server, to write the contents of the archived redo log files that came from the primary to the datafiles of the standby server?
    b. I am using online redo logs on the standby server, not standby redo logs. Will the online redo logs on the standby server have any effect on the shipping of redo logs from the primary database?
    c. In my standby database the online redo log state changes between CLEARING and CLEARING_CURRENT. How do the standby server's redo logs change their state?
    Regards,
    Vamsi.

    Hi again,
    They are not used in a physical standby database. They exist in order to be used in case the standby database is opened read/write (failover/snapshot standby). Here is what the documentation says:
    Online redo logs
    Every instance of an Oracle primary database and logical standby database has an associated online redo log to protect the database in case of an instance failure. Physical standby databases do not have an associated online redo log, because physical standby databases are never opened for read/write I/O; changes are not made to the database and redo data is not generated.
    Create an Online Redo Log on the Standby Database
    Although this step is optional, Oracle recommends that an online redo log be created when a standby database is created. By following this best practice, a standby database will be ready to quickly transition to the primary database role.
    ...

  • Doubt in some Concepts

    Hi everyone,
    I have doubts about some security concepts. I've been reading documents, but I can't clear these things up... here's the thing:
    What exactly are a keystore, a certificate, and .pfx or .p12 files?
    I understood that .pfx or .p12 files are keystores that have public and private keys inside... but to my mind a certificate also has a public key and a private key, so I could say that keystores are certificates... but I'm mixing these concepts up, pretty sure of that...
    Could anyone help me?
    Thanks in Advance
    Edited by: cs.santos on Jun 16, 2009 12:28 PM

    1. Certificates only have public keys in them
    2. PFX and PKCS12 are essentially the same thing.
    3. PKCS12 is a format that is typically used to store a single keypair, both the public and private parts.
    4. KeyStore is a Java class that provides storage for keys of all kinds, as the name implies. The KeyStore class supports different formats; one of these is PKCS12 (a short sketch follows).
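    To see how these pieces relate in code, here is a minimal sketch that opens a .p12/.pfx file through the Java KeyStore API and pulls out the certificate (public part) and the private key. The file name "client.p12", the password, and the alias handling are placeholders for illustration only.

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.security.PrivateKey;
    import java.security.cert.Certificate;

    public class P12Demo {
        public static void main(String[] args) throws Exception {
            char[] password = "changeit".toCharArray();

            KeyStore ks = KeyStore.getInstance("PKCS12");    // PKCS12 is a keystore *format*
            try (FileInputStream in = new FileInputStream("client.p12")) {
                ks.load(in, password);                       // the .p12 file *is* the keystore
            }

            String alias = ks.aliases().nextElement();       // first entry in the store
            Certificate cert = ks.getCertificate(alias);                // certificate: public key only
            PrivateKey key = (PrivateKey) ks.getKey(alias, password);   // private key stored beside it

            System.out.println("Certificate type: " + cert.getType());
            System.out.println("Key algorithm: " + key.getAlgorithm());
        }
    }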

  • Doubts on casting concepts in TAW12 part 1

    Can someone help me clear up a little confusion about casting in the materials provided by SAP Education for TAW12 part 1.
    I attended ILT and my instructor emphasized that: "wherever in this material we read 'Up-cast' or 'Down-cast', we should consider this an error, because what they meant to say is exactly the opposite".
    This has now left me confused, especially after revising.
    Can someone please shed some light on this terminology and concept, especially in relation to the materials provided in TAW12 ABAP Workbench Concept Part 1 (2013 SAP AG. All rights reserved)?
    I have not been able to find much material online for this section of the book, or for many others.
    Any help would be greatly appreciated.
    Vince

    Hi Vince,
    First thing: STOP worrying about whether the content of the SAP material is right or wrong.
    Now, what I understand is that the instructor tried to explain the concepts in an easy-to-remember way, but that seems to have messed up your basics on up-cast and down-cast.
    I have looked in my SAP material (but mine is the 2005 SAP AG edition). The definitions are fine.
    Anyway, I'll try to explain in my style:
    Narrowing cast (up-cast): assigning a subclass instance to a reference variable of type "reference to superclass". Here we navigate from a more detailed view to one with less detail.
    Widening cast (down-cast): assigning a superclass reference to a subclass reference variable. We go from a less detailed view to a more detailed one.
                             SUPER CLASS( vehicle ) less details
                                       |
                                       |
                                       |
                                SUB CLASS ( car, truck, bus, bike ) more details
    PS:- Try to understand the meaning from the example and then go back to the definition in the material.

  • Doubt in Overloading Concept in Packages

    Hi,
    I have two Stored Procedures are inside a package
    sp_mem(mCursor REFCURSOR Datatype,MemberId INTEGER);
    sp_mem(mCursor REFCURSOR Datatype, mEmailId INTEGER);
    In the above procedures, if the second parameter is MemberId then the first SP will be executed.
    If the second parameter is mEmailId then the second SP will be executed.
    This is the normal overloading concept inside a package. But in my case, I am returning the recordset using a ref cursor.
    I just want to know whether it is possible to return the recordset using a ref cursor with the overloading concept for both SPs.
    Thanks,
    Murali.V

    Hello
    You need to be careful with this type of overloading. Overloading is generally based on the position and data types of the parameters being supplied, which are used to identify the "signature" of the particular procedure to call, i.e.
    sp_myproc(ref cursor, integer)
    sp_myproc(ref cursor, date)
    sp_myproc(ref cursor, varchar2)
    sp_myproc(ref cursor, varchar2, integer)
    etc etc
    When you have 2 procedures that have the same signature, unless you specify the name of the parameter, there is no way to determine which procedure to call:
    tylerd@DEV2> CREATE OR REPLACE PACKAGE pkg_test_overload
      2  IS
      3
      4     PROCEDURE sp_mem(mCursor sys_refcursor,MemberId INTEGER);
      5     PROCEDURE sp_mem(mCursor sys_refcursor, mEmailId INTEGER);
      6
      7  END;
      8  /
    Package created.
    tylerd@DEV2>
    tylerd@DEV2> CREATE OR REPLACE PACKAGE BODY pkg_test_overload
      2  IS
      3
      4     PROCEDURE sp_mem(mCursor sys_refcursor,MemberId INTEGER)
      5     IS
      6
      7     BEGIN
      8
      9             NULL;
    10
    11     END;
    12
    13     PROCEDURE sp_mem(mCursor sys_refcursor, mEmailId INTEGER)
    14     IS
    15
    16     BEGIN
    17
    18             NULL;
    19
    20     END;
    21
    22  END;
    23  /
    Package body created.
    tylerd@DEV2> var mycursor refcursor
    tylerd@DEV2> var memberid number
    tylerd@DEV2> exec pkg_test_overload.sp_mem(:mycursor,:memberid)
    BEGIN pkg_test_overload.sp_mem(:mycursor,:memberid); END;
    ERROR at line 1:
    ORA-06550: line 1, column 7:
    PLS-00307: too many declarations of 'SP_MEM' match this call
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    tylerd@DEV2> exec pkg_test_overload.sp_mem(:mycursor,MemberId=>:memberid)
    PL/SQL procedure successfully completed.
    tylerd@DEV2> exec pkg_test_overload.sp_mem(:mycursor,mEmailId=>:memberid)
    PL/SQL procedure successfully completed.
    To me, using named notation with overloading kind of defeats part of the objective... I'm not saying it's completely wrong, as sometimes I guess it's unavoidable, but I'd review the design carefully :-)
    HTH
    David
    Message was edited by:
    David Tyler
    Oops, copied the wrong bit of the output! :-)

  • A doubt about mailing concept

    Hi, I just finished a mail servlet from which I can send mail through SMTP and receive it at my POP account. Attachments can also be included/downloaded in both cases.
    1>
    The problem is that when I log in to the POP account and look at the messages after they have been retrieved, I get only new messages and not the ones that have already been viewed, as we do in Outlook Express. When I view them through Outlook Express first and then go to the inbox from my servlet, those messages are not there and I get an empty inbox.
    2> Every time, the retrieval process starts showing the same messages again.
    How do I handle this? The example code I find for JavaMail works in the same way.

    If this is the limitation of JavaMail, as I have not seen a single code example either in the demos or anywhere else, then I would say it is good for nothing.
    It's not a limitation of Java. You need to understand the POP protocol. I don't think there's a way with POP to read a message and leave it on the server. As far as I know, once you read a message, it comes off the server. You need to manage its storage after that. The IMAP protocol allows you to read a message but leave it on the server. Read up on these protocols, and go through the [url http://java.sun.com/developer/onlineTraining/JavaMail/]JavaMail tutorial.
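    For comparison, here is a minimal JavaMail sketch that reads the inbox over IMAP and leaves the messages on the server. The host, user name, and password are placeholders, and it assumes the javax.mail API is on the classpath.

    import java.util.Properties;
    import javax.mail.Folder;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Store;

    public class ImapReadDemo {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("mail.store.protocol", "imaps");

            Session session = Session.getInstance(props);
            Store store = session.getStore("imaps");
            store.connect("imap.example.com", "user", "password");  // placeholders

            Folder inbox = store.getFolder("INBOX");
            inbox.open(Folder.READ_ONLY);            // read without deleting or flagging anything

            for (Message m : inbox.getMessages()) {
                System.out.println(m.getSubject());  // message stays on the server
            }

            inbox.close(false);                      // false = do not expunge
            store.close();
        }
    }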

  • Doubt about the concept of HashSet and LinkedHashSet

    I was reading one of the SCJP 6 exam books, and when talking about Set it gives this definition:
    When using HashSet or LinkedHashSet, the objects you add to them must override hashCode(). If they don't override hashCode(), the default Object.hashCode() method will allow multiple objects that you might consider "meaningfully equal" to be added to your "no duplicates allowed" set.
    What I am getting confused about is that IF the objects we add to them must override hashCode(), then we must override the equals() method as well. Isn't that right?
    Edited by: roamer on 2009-10-23 10:29

    jverd wrote:
    endasil wrote:
    When using HashSet or LinkedHashSet, the objects you add to them must override hashCode(). If they don't override hashCode(), the default Object.hashCode() method will allow multiple objects that you might consider "meaningfully equal" to be added to your "no duplicates allowed" set.
    This really is completely wrong. Duplicates being added to your set has nothing to do with not overriding hashCode, and everything to do with not overriding equals.
    No, if you override equals but not hashCode, you can get dupes. That is, two items that your equals method says are equal can make it into the Set.
    Sorry, to clarify, what I meant by that was that to avoid duplicates of meaningfully equal objects, you must override equals, and overriding hashCode is just a consequence of overriding equals (to maintain the invariant that equal objects have equal hash codes). Overriding hashCode alone will not, and cannot, prevent meaningfully equal duplicates if you don't first define "meaningfully equal" by overriding equals.
    In summary, it's wrong because
    1) You don't have to override hashCode() if you don't override equals (or rather, don't deviate from the default "two objects are equal if and only if they are the same object").
    2) If you have any other definition of meaningfully equal that you want enforced in a Set, you must override equals to correspond to that definition.
    3) If you override equals, you must override hashCode by the contract of hashCode and equals.
    That's why I was saying that the statement was completely wrong. It's taking a backwards approach.
    By not overriding equals, you're saying that no two separate instances can be meaningfully equal, so the default hashCode is fine.
    Yeah, I kind of figured that for the statement to be meaningful, it is assumed you have already overridden equals. Otherwise what's the point of even mentioning two objects being "meaningfully equal"?
    All this talk of "meaningfully equal" makes me think the opposite: that they haven't overridden equals, or haven't discussed it yet. Otherwise you could just say that the objects are equal, or are equal according to equals().
    I would question the authority of the source that said that.
    I would question the source's ability to express himself clearly. :-)
    Yeah, maybe that's all it is. In that case, I suggest (to the OP) reading [Effective Java, Chapter 3|http://java.sun.com/developer/Books/effectivejava/Chapter3.pdf]. It's the best source I've seen for beginners to make sense of all this.
    Edited by: endasil on 26-Oct-2009 1:11 PM
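    To make the summary above concrete, here is a small self-contained sketch; Point and GoodPoint are made-up classes, not from the book or this thread.

    import java.util.HashSet;
    import java.util.Objects;
    import java.util.Set;

    class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;   // "meaningfully equal" is defined here
        }
        // hashCode deliberately NOT overridden: equal points usually land in different buckets.
    }

    class GoodPoint extends Point {
        GoodPoint(int x, int y) { super(x, y); }

        @Override
        public int hashCode() { return Objects.hash(x, y); }  // restores the equals/hashCode contract
    }

    public class SetDemo {
        public static void main(String[] args) {
            Set<Point> bad = new HashSet<>();
            bad.add(new Point(1, 2));
            bad.add(new Point(1, 2));
            System.out.println(bad.size());   // almost always 2: a duplicate slipped in

            Set<GoodPoint> good = new HashSet<>();
            good.add(new GoodPoint(1, 2));
            good.add(new GoodPoint(1, 2));
            System.out.println(good.size());  // 1: the duplicate is rejected
        }
    }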

  • Doubt in ooops concept..

    hi,
    I am using the NW 7.1 trial version. When I try to do a simple ALV program in Web Dynpro:
    DATA: lr_column_settings TYPE REF TO if_salv_wd_column_settings,
    lr_input_field TYPE REF TO cl_salv_wd_uie_input_field.
    lr_column_settings ?= l_value.
    lr_column = lr_column_settings->get_column( 'PRICE' ).
    When I try to execute this, an error comes up saying "spelling or comma error; ?= is not valid" on the particular line where the ?= operator is used...
    It works in ECC 6.0.
    Kindly tell me the way...

    Hi,
    You should ask your question in the right forum, which should be "Web Dynpro ABAP".
    You will get better feedback.
    Regards,
    Olivier

  • Doubt on basic  concept

    Say we have a class called Animal which has a method called shout().
    We have another class called Dog which has a method called bark().
    The Dog class extends the Animal class.
    So the Dog class now has 2 methods:
    1)shout()
    2)bark()
    If we declare like this
    Animal a=new Animal();//valid statement
    a.shout();//valid statement
    Dog d=new Dog();//valid statement
    d.shout();//valid statement
    d.bark();//valid statement
    Animal a=new Dog();
    a.shout();//valid statement
    a.bark();//invalid since type of reference variable is Animal
    Dog d=new Animal();
    Why is the above declaration not valid?
    An Animal object has the shout() method.
    By inheritance, the Dog class has both the shout() and bark() methods.
    So with the above type of declaration, d.shout() should execute. But that is wrong.
    A subclass reference variable cannot point to a superclass object. I know this kind of statement, but can anyone explain why?

    kirn291 wrote:
    Dog d=new Animal();
    Why is the above declaration not valid?
    To a Dog variable you can only assign objects which are of type Dog, and an Animal isn't.
    A Dog object is both of type Dog and of type Animal. This means Dog objects can be assigned to both Dog variables and Animal variables.

  • Doubt Live cache concept

    Hi Gurus,
    1. Could anyone tell me the use of liveCache on the liveCache server?
    2. Why is it required only for the APO server, and why don't we use it on the BW server?
    Thanks,
    Pavan

    You can get all the required info on help.sap.com; just search for liveCache. More information is also available under SAP liveCache technology.

  • Doubt in posting thread

    In which category can I post J2ME threads? Please reply; I have several doubts about J2ME concepts and I want to post many threads related to J2ME.

    There are no J2ME forums on OTN - you can try http://www.j2meforums.com/forum/ perhaps

  • Reg : Concept of Locks --

    Hi Experts,
    I've got a few doubts about the concepts of locking.
    I'm referring this article - http://docs.oracle.com/cd/E14072_01/server.112/e10592/ap_locks001.htm
    1]
    >
    When a transaction obtains a row lock for a row, the transaction also acquires a table lock for the table in which the row resides.
    The table lock prevents conflicting DDL operations that would override data changes in a current transaction.
    >
    What is the significance of the 2nd line? Does that mean that, at that time, we can't DROP or ALTER the table structure?
    Although it is written in a very clear manner, I'm not able to understand it properly.
    2]
    >
    A row share lock (RS), also called a subshare table lock (SS), indicates that the transaction holding the lock on the table has locked some rows in the table and intends to update them, as is the case in a SELECT ... FOR UPDATE statement. An SS lock is the least restrictive mode of table lock, offering the highest degree of concurrency for a table.
    A row exclusive lock (RX), also called a subexclusive table lock (SX), indicates that the transaction holding the lock has made updates to rows in the table. An SX lock by itself allows other transactions to insert, update, merge into, or delete other rows in the table concurrently. Therefore, SX locks allow multiple transactions to obtain simultaneous SX and SS locks for the same table.
    >
    - What is the need for an SX lock when it allows other transactions to manipulate data?
    - Both SS and SX look like the same kind of lock. Can anyone please point out the difference with an example?
    3]
    >
    A share table lock (S) held by one transaction allows other transactions to query the table but allows updates only if a share table lock is held by only a single transaction. Multiple transactions may hold a share table lock concurrently, so holding this lock is not sufficient to ensure that a transaction can modify the table.
    A share row exclusive table lock (SRX), also called a share-subexclusive table lock (SSX), is more restrictive than a share table lock. Only one transaction at a time can acquire an SSX lock on a given table. An SSX lock held by a transaction allows other transactions to query or lock specific rows using SELECT ... FOR UPDATE, but not to update the table.
    An exclusive table lock (X) is the most restrictive mode of table lock, allowing the transaction that holds the lock exclusive write access to the table. Only one transaction can obtain an X lock for a table.
    >
    I'm also a little confused by these locking types and not able to understand the difference in practice. I know these are very important concepts and should not be neglected.
    Can anyone please help me understand these concepts?
    Thanks In Advance,
    Ranit B.
    Edited by: ranit B on Nov 28, 2012 6:47 PM
    -- added [3]

    padders wrote:
    Does that mean, at that time, we can't DROP or ALTER the table structure?
    Yes, that's pretty much what it means.
    What is the need of an SX lock when it allows other transactions to manipulate data?
    As the text says, the lock allows concurrent update of other rows, i.e. you lock some row(s) but other rows can still be updated.
    Thanks Padders.
    But then, what is the difference between SS and SX? Both allow other rows to be updated.
    Please correct me if I'm wrong.
    I got some idea. It would be really helpful if you could also give me some pointers on the other doubts.

  • File mark() reset() doubt...

    Hi,
    I have a doubt regarding the reset() and mark() methods of the BufferedInputStream class. Here is the code:
    import java.io.*;

    class FileDemo {
        public static void main(String[] args) throws IOException {
            String st = "123456789";
            int c;
            byte[] buf = st.getBytes();
            ByteArrayInputStream in = new ByteArrayInputStream(buf);
            BufferedInputStream f = new BufferedInputStream(in);
            for (int i = 0; i < 8; ++i) {
                c = f.read();
                System.out.print((char) c);
            }
            System.out.println();
            f.mark(5);
            f.reset();
            c = f.read();
            System.out.print((char) c);
        }
    }
    Output:
    12345678
    9
    In the above code, when I execute f.mark(5), the file pointer should make a mark at the 5th byte, i.e. character 5, and when I execute reset() the file pointer should point back to this 5th byte and print 5 the next time I read. But instead, the output I get after reset() is executed is 9. Please clarify.

    In above code when i execute f.mark(5), the file pointer should make a mark at 5th byte i.e character 5
    No, it shouldn't.
    and when i execute reset the file pointer must point back to this 5th byte and must print 5 when i print for the next time. But instead the output i get is 9 after reset is executed. Please clarify.....
    The API documentation is pretty clear, I think, especially if you follow the links to the description of what mark() is supposed to do:
    "Marks the current position in this input stream. A subsequent call to the reset method repositions this stream at the last marked position so that subsequent reads re-read the same bytes.
    The readlimit argument tells this input stream to allow that many bytes to be read before the mark position gets invalidated."
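    In other words, mark() records the position at the moment it is called; to re-read from character 5 you must call mark() before reading it. A small sketch of the intended usage:

    import java.io.BufferedInputStream;
    import java.io.ByteArrayInputStream;
    import java.io.IOException;

    class MarkResetDemo {
        public static void main(String[] args) throws IOException {
            BufferedInputStream f =
                new BufferedInputStream(new ByteArrayInputStream("123456789".getBytes()));

            for (int i = 0; i < 4; i++) {      // consume "1234"
                f.read();
            }
            f.mark(16);                        // remember THIS position (just before '5')
            System.out.print((char) f.read()); // prints 5
            System.out.print((char) f.read()); // prints 6
            f.reset();                         // jump back to the marked position
            System.out.print((char) f.read()); // prints 5 again
        }
    }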

  • How to renormalize number of flows in Netflow Sampled data

    Hi,
    I am working on extrapolation (renormalization) of bytes/packets/flows from randomly sampled (1 out of N packets) collected data. I believe bytes/packets can be renormalized by multiplying the bytes/packets value in the exported flow record by N.
    Now I am trying to extrapolate the number of flows. So far I have not found any information on it. Do you have any idea how flows can be renormalized from sampled data?
    At the same time, I have some doubts about this concept altogether:
    1. In packet sampling, we do not know how many flows were dropped. Even the router cache will not have entries for dropped flows.
    2. In flow sampling, the router cache will maintain entries for all the flows, and there may be some way to know how many actual flows there were. But again, there is no way to know the values of individual attributes in missed flows, like srcip/dstip/srcport/dstport etc. (though they are there in the flow cache).
    3. In the case of sampling (1 out of N packets), we multiply #packets and #bytes by N anyway to arrive at an estimate of the total packets and bytes. When we multiply by N, it means we have also taken into account the packets that were NOT sampled. So it means all the packets which flowed between source and destination have been accounted for. Then there are no missed flows, isn't it? And if there do exist some missed flows, then multiplying by N to extrapolate the number of packets/bytes is not correct.
    4. What is the use of the count of flows anyway? The number of flows may vary depending on configuration such as the active timeout, so it does not provide any information about the actual traffic between source and destination, unlike the number of packets and bytes.
    Please share your thoughts.
    Thanks,
    Deepak

    The simplest way is to call GetTableCellRangeValues with VAL_ENTIRE_TABLE as the range, then summing the array elements.
    But I don't understand your comment on the checksum, so this may not be the most correct method for your actual needs: can you explain what you mean?

  • Trigger is not getting disabled

    Hi ,
    I have a doubt about the trigger concept.
    I have one table, REF_cGSC_T, on which 2 triggers are written:
    One is a trigger blocking the DELETE operation.
    The second one is a replicating trigger {i.e. an I/U/D operation trigger}.
    So now, to test the second trigger (the replicating trigger), I disabled the first trigger (the blocking trigger).
    But while testing the second trigger I'm getting the message "DELETE IS NOT ALLOWED ON THIS TABLE".
    Then I checked the status of the blocking trigger, and I was shocked to see that the status is ENABLED...
    Why does this happen? Do I need to make any changes to get my second trigger working properly?

    Did you check whether Ttt.k_ttt.runTests enables the disabled trigger? Because disabling a trigger should work fine.
    SQL> create table t(no integer)
      2  /
    Table created.
    SQL> create or replace trigger t_block_insert before insert on t for each row
      2  begin
      3     raise_application_error(-20001,'Cannot perform insert');
      4  end;
      5  /
    Trigger created.
    SQL> insert into t values(1)
      2  /
    insert into t values(1)
    ERROR at line 1:
    ORA-20001: Cannot perform insert
    ORA-06512: at "SYSADM.T_BLOCK_INSERT", line 2
    ORA-04088: error during execution of trigger 'SYSADM.T_BLOCK_INSERT'
    SQL> alter trigger t_block_insert disable
      2  /
    Trigger altered.
    SQL> insert into t values(1)
      2  /
    1 row created.
    Edited by: Karthick_Arp on Oct 14, 2008 11:22 PM
