Multiple JNDI connections and connecting to multiple schemas

Currently I have some code that makes a connection to a specific schema in the Oracle database.
I need to somehow connect to more than one schema and do some result set processing on tables in two different schemas.
How can I establish connections to two schemas with JNDI?
Is this possible?
Thanks.

Have you looked at XA data sources and distributed transactions?
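
If the two schemas do not need to participate in one atomic transaction, a simpler route than XA is to configure one data source per schema in the app server and look both up from JNDI. Below is a minimal sketch of that idea; the JNDI names, table names, and queries are hypothetical, and the exact java:comp/env prefix depends on your server and resource-ref setup.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class TwoSchemaSketch {
        public void processBothSchemas() throws Exception {
            InitialContext ic = new InitialContext();
            // Hypothetical bindings: each data source connects as the owner of one schema.
            DataSource dsA = (DataSource) ic.lookup("java:comp/env/jdbc/SchemaADS");
            DataSource dsB = (DataSource) ic.lookup("java:comp/env/jdbc/SchemaBDS");

            try (Connection conA = dsA.getConnection();
                 Connection conB = dsB.getConnection();
                 Statement stA = conA.createStatement();
                 Statement stB = conB.createStatement();
                 ResultSet rsA = stA.executeQuery("SELECT id, name FROM customers");
                 ResultSet rsB = stB.executeQuery("SELECT id, qty FROM stock")) {
                // correlate the two result sets here
            }
        }
    }

If both schemas live in the same database and one account has been granted access to both, a single connection with schema-qualified table names (or ALTER SESSION SET CURRENT_SCHEMA) also works; XA only becomes necessary when updates to both schemas must commit or roll back together.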

Similar Messages

  • Does Hibernate support connections to multiple schemas?

    I want to connect to multiple schemas on the same database using
    Hibernate.
    My hbm.xml looks like this:
    <hibernate-mapping>
    <class name="SampleClass" table="SampleTable" schema="DynamicSchema">
    </hibernate-mapping>
    I need to enter the schema name as a parameter
    and change it dynamically.
    Is this possible, or is there another way in Hibernate?
    Thanks in advance, and
    some code wouldn't kill anyone :)

    To set the schema dynamically, the org.hibernate.mapping.Table class has a setSchema(String) method:
    http://www.hibernate.org/hib_docs/v3/api/org/hibernate/mapping/class-use/Table.html
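    A minimal sketch of that approach, assuming Hibernate 3.x and XML mappings like the one above (buildFactory is a hypothetical helper, and the schema name is supplied at runtime):

        import java.util.Iterator;
        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.Configuration;
        import org.hibernate.mapping.Table;

        public class DynamicSchemaSketch {
            // Builds a SessionFactory whose mapped tables all point at the given schema.
            public static SessionFactory buildFactory(String schemaName) {
                Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
                for (Iterator tables = cfg.getTableMappings(); tables.hasNext();) {
                    ((Table) tables.next()).setSchema(schemaName);
                }
                return cfg.buildSessionFactory();
            }
        }

    Note that this yields one SessionFactory per schema, since a factory's mappings are fixed once it is built; if every class uses the same schema, setting the hibernate.default_schema property per factory achieves the same effect.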

  • Failover and Load Balancing with JNDI Connection Pools

    Hi,
    I am trying to figure out how JNDI connection pooling would work with failover or DNS load balancing.
    Would connections be distributed equally among the list?
    Would the pool work with multiple heterogeneous connections (i.e. connections to different but equivalent servers ), or do all the connections in the pool have to be homogeneous (i.e. to the same server)?
    Thanks,
    Sergio


  • Crystal Reports 2008 - Eclipse and JNDI connection.

    I am evaluating Crystal Reports 2008.
    I want Crystal Reports to use a JNDI connection that is defined within my app server, so that I'll have better control of the properties defined there.
    How do I set up a JNDI data source connection in Crystal Reports 2008?
    I defined the following parameters:
                    JNDI Provider URL: jnp://localhost:1099
                    Initial Context: org.jnp.interfaces.NamingContextFactory
    and the user name / password as defined in the JBoss setup.
    I also have "jboss\client\jbossall-client.jar" in the classpath.
    But I am getting a "Cannot instantiate class" exception.
    Thanks in advance for the help.

    Mr. Ted, we can expose JNDI connections from the app server: the database connection pool resources are exposed as JNDI resources, JMS factories and connections are exposed as JNDI, and so on.
    In fact, JNDI is a means of securely exposing the resources managed by the app server to be used by trusted clients.
    We have achieved this on WebLogic (8.1 - JDK 1.4.2), but we would like to replicate the same in GlassFish app server version 3.1.2, and would require the proper factory placed inside the rpt as well as in CRConfig.xml.
    Otherwise (if we are using a conventional URL string), when we ship the rpt files we have to point them at localhost only. This mandates that the rpt files be on the same server as the database server and database. We want to avoid this by creating a JNDI resource and using that database access resource inside the rpt files.
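    As an illustration, a JNDI client for the JBoss naming service described in the question would typically be initialized roughly like this (a sketch only; the data source binding name is hypothetical, and "java:" names are normally only visible to code running inside the same JBoss VM):

        import java.util.Properties;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class JnpLookupSketch {
            public static void main(String[] args) throws Exception {
                Properties env = new Properties();
                // Values taken from the post above; adjust host/port to your JBoss instance.
                env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
                env.put(Context.PROVIDER_URL, "jnp://localhost:1099");
                env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");

                Context ctx = new InitialContext(env);
                DataSource ds = (DataSource) ctx.lookup("java:jdbc/MyDataSource"); // hypothetical binding
                System.out.println("Looked up: " + ds);
            }
        }

    A "Cannot instantiate class" error at this layer usually means the named initial context factory (here, the JBoss client classes) is not on the classpath that the reporting tool actually uses.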

  • Problem with JNDI/LDAP and connection pool

    I'm a newbie to Java but am attempting to write a servlet that retrieves info used to populate the contents of drop-down menus. I'd like to only have to do this once. The servlet also retrieves other data (e.g. user profile info, etc.). I'd like to be able to use the connection pool for all of these operations, but I'm getting a compile error:
    public class WhitePages extends HttpServlet {
    ResourceBundle rb = ResourceBundle.getBundle("LocalStrings");
    public static String m_servletPath = null;
    public static String cattrs = null;
    public static String guidesearchlist[] = {};
    public static int isLocalAddr = 0;
    private int aeCtr;
    private String[] sgDNArray;
    private HashMap sgDN2DNLabel = new HashMap();
    private HashMap sgDN2SearchGuide = new HashMap();
    private String strport;
    private int ldapport;
    private String ldaphost;
    private String ldapbinddn;
    private String ldapbindpw;
    private String ldapbasedn;
    private int maxsearchcontainers;
    private int maxsearchkeys;
    private String guidesearchbases;
    private String guidecontainerclass;
    private String strlocaladdr;
    private String providerurl;
    // my init method establishes the connection
    // pool and then retrieve menu data
    public void init(ServletConfig config) throws ServletException {
    super.init(config);
    String strport = config.getInitParameter("ldapport");
    ldapport = Integer.parseInt(strport);
    String strconts = config.getInitParameter("maxsearchcontainers");
    maxsearchcontainers = Integer.parseInt(strconts);
    String strkeys = config.getInitParameter("maxsearchkeys");
    maxsearchkeys = Integer.parseInt(strkeys);
    ldaphost = config.getInitParameter("ldaphost");
    ldapbinddn = config.getInitParameter("ldapbinddn");
    ldapbindpw = config.getInitParameter("ldapbindpw");
    ldapbasedn = config.getInitParameter("ldapbasedn");
    guidesearchbases = config.getInitParameter("guidesearchbases");
    guidecontainerclass = config.getInitParameter("guidecontainerclass");
    strlocaladdr = config.getInitParameter("localaddrs");
    providerurl = "ldap://" + ldaphost + ":" + ldapport;
    /* Set up environment for creating initial context */
    Hashtable env = new Hashtable(11);
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.PROVIDER_URL, providerurl.toString());
    /* Enable connection pooling */
    env.put("com.sun.jndi.ldap.connect.pool", "true");
    StringTokenizer st = new StringTokenizer(guidesearchbases, ":" );
    String guidesearchlist[] = new String[st.countTokens()];
    for ( int i = 0; i < guidesearchlist.length; i++ ) {
    guidesearchlist[i] = st.nextToken();
    // Get a connection from the connection pool
    // and retrieve the searchguides
    StringBuffer asm = new StringBuffer(""); // This is the advanced search menu htmlobject buffer
    StringBuffer strtmpbuf = new StringBuffer(""); // This is the simple search menu htmlobject buffer
    try {
    StringBuffer filter = new StringBuffer("");
    filter.append("(objectclass=" + guidecontainerclass + ")");
    String[] attrList = {"dn","cn","searchguide"};
    SearchControls ctls = new SearchControls();
    ctls.setReturningAttributes(attrList);
    ctls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    String attrlabelkey;
    sgDNArray = new String[guidesearchlist.length];
    for( int i = 0; i < guidesearchlist.length; i++ ) {
    // Search each of the namingspaces where
    // searchguides exist then build
    // the dynamic menus from the result
    DirContext ctx = new InitialDirContext(env);
    NamingEnumeration results = ctx.search(guidesearchlist, filter, ctls);
    I get a compile error:
    WhitePages.java:164: cannot resolve symbol
    symbol : method search (java.lang.String,java.lang.StringBuffer,javax.naming.directory.SearchControls)
    location: interface javax.naming.directory.DirContext
    NamingEnumeration results = ctx.search(guidesearchlist[i], filter, ctls);
    ^
    WhitePages.java:225: cannot resolve symbol
    symbol : variable ctx
    location: class OpenDirectory
    ctx.close();
    ^
    Can anyone help? If there is someone out there with JNDI connection pool experience I would appreciate your assistance!
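    The first compile error appears because DirContext.search() has no overload that takes a StringBuffer filter, and the second because ctx is declared inside the try block and is out of scope where close() is called. A corrected sketch of that section, reusing the servlet's own field names, might look like this:

        DirContext ctx = null;
        try {
            ctx = new InitialDirContext(env);
            for (int i = 0; i < guidesearchlist.length; i++) {
                // search() expects a String filter, so convert the StringBuffer
                NamingEnumeration results =
                    ctx.search(guidesearchlist[i], filter.toString(), ctls);
                // ... build the menus from results ...
            }
        } catch (NamingException ne) {
            throw new ServletException(ne);
        } finally {
            if (ctx != null) {
                try { ctx.close(); } catch (NamingException ignored) { }
            }
        }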

    Manish,
    The issue may not be related to the number of connections or the initial connections. Check your heap size (-Xms, -Xmx). Turn on verbose GC (-verbose:gc). Your heap may not be big enough to accept the 25,000 rows.
    Bernie
    "Manish Kumar Singh" <[email protected]> wrote in message
    news:3e6c34ca$[email protected]..
    We are creating a result set with 25,000 rows (each row has 56 columns) by getting the connection from a data source. With the initial capacity of the connection pool set to 5, the max capacity 30, and grow connections 1,
    the server gets an out-of-memory exception when we issue a new request, even after closing the previous connections.
    Now, if we change the initial capacity to 1 and keep everything else the same, the issue is resolved and the server works fine.
    Could you please help me out in this regard?
    Thanks in advance,
    Manish

  • Using both 837 P multiple and 837 P single schema

    I have one application which is using the 837P multiple schema. Now I have another application which needs to use the 837P single schema,
    but how do I use the 837P single schema? Both have the same namespace and root node, so I can't use the schema from that application, and I can't deploy my own schema either. Is there any solution for this situation?

    Can you elaborate on what you mean by "Multiple schema" vs "Single schema"... it seems like the single schema could be satisfied by the multiple schema (assuming a difference between repeating vs non-repeating structure(s)).
    Assuming you need to deploy both schemas several options are available.
    If the documents are received on different transport URIs, you can dedicate a different BizTalk application for each schema type.
    If the documents are received on different transport URIs, you can change the namespace of one of the schemas.
    If the documents are received on the same transport URIs, you may run into issues disassembling the messages, as the disassembler (EDI or flat file) may match to the wrong schema during the Probe() method call. This option will require more discussion to determine what you may need to do.
    David Downing

  • Sun JSAS P.E 9.0 and JNDI connection

    Hi all, I am guessing this is probably the wrong spot, so please correct me if there is a better place to put it. I am trying to set up a JNDI connection to a database, but am confused by the parameters that are required.
    I am expected to fill in the following:
    Resource Type:
    JNDI Lookup:
    Factory Class:
    But I haven't the faintest idea what these are supposed to represent. Could anyone point me to either a site with an example configuration or give me an idea of what this means?
    Thanks in advance.
    PS. I have tried to search the web with no real results, but if they are out there, let me know and I will go back to searching.

    The JDBC forum is a more appropriate place than New to Java. However, you have not given much
    information for anyone to work with.
    How are you setting up the connection? Parameters for what? etc.
    I want to use JNDI and I am setting it up under External Resources in the server.
    It gives this: "Create an external JNDI resource so that applications can gain access to resources stored in an external repository."
    And the items I listed are those that are required by the server.
    I am listing as much as I am aware of at this time. (Sorry if it is insufficient to help.)
    PS. Is there a method to move this post to the JDBC forum?
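    For illustration only (these Admin Console values are assumptions, not from the thread): an external JNDI resource pointing at an LDAP directory might be filled in roughly as follows, after which application code can look it up like any other JNDI entry.

        // Hypothetical external JNDI resource settings in the server:
        //   JNDI Name:     external/ldap
        //   JNDI Lookup:   ou=People,dc=example,dc=com        (name to look up in the external store)
        //   Resource Type: javax.naming.directory.DirContext  (Java type the lookup should return)
        //   Factory Class: com.sun.jndi.ldap.LdapCtxFactory   (initial context factory of the external store)
        //   plus a java.naming.provider.url property such as ldap://ldap.example.com:389
        import javax.naming.InitialContext;
        import javax.naming.directory.DirContext;

        public class ExternalResourceLookup {
            public static void main(String[] args) throws Exception {
                DirContext people = (DirContext) new InitialContext().lookup("external/ldap");
                System.out.println(people.getNameInNamespace());
            }
        }

    For a plain database pool, the usual route is a JDBC connection pool plus a JDBC resource rather than an external JNDI resource.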

  • Configuring Multiple Schemas in Oracle ADF application

    Hi,
    Thanks all for the replies for my previous posts :-)
    We have a requirement in which we need to configure our application for multiple database schemas. At runtime, based upon the logged-in user, we need to get data from the schema related to that user.
    These schemas may be on the same database instance/server or on different DB servers.
    The requirement is like this:
    Logged-in user    DB schema configured
    "A"               ASchema @ 127.0.0.1
    "B"               BSchema @ 127.0.0.2
    "C"               ASchema @ 127.0.0.1
    "D"               DSchema @ 127.0.0.2
    Based on the user login (which happens through one OID system), I need to fetch data from a different schema (configured for a different application).
    These schemas have different sets of tables, but the tables we require for our application have the same structure.
    Same structure means the same table names with the same relations.
    Please let me know whether we can do this in an Oracle ADF application, and how.
    One more requirement is that if I add a user at runtime and configure a DB schema for this user, all data related to this user should be loaded from the schema configured for them.
    Thanks in Advance,
    Amit
    newbie to Oracle ADF

    You can try the example below to change the database at run time; in the filter, change the connection and not the credentials:
    http://www.oracle.com/technetwork/developer-tools/jdev/dynamicjdbchowto-101755.html#03.
    Most of the other work there is done to handle the state management tables, which you might also have to do unless you use a file passivation store or it is acceptable to always passivate to a single database.
    If it is acceptable to always passivate to a single DB, hard-code jbo.server.internal_connection in the AM configurations to a DB JNDI name.
    ADF allows you to activate and passivate to a different database than the one you are connected to for getting and putting data.

  • Help With Multiple Schemas In Multiple Environments

    Dear Oracle Forum:
    We have a bit of controversy around the office and I was hoping we could get some expert input to get us on the right track.
    For the purposes of this discussion, we have two machines, development and production. Currently, on each machine, we have one database with multiple schemas, say, one for sales data and another for inventory. The sales data has maybe 200 tables and the inventory has another 50. About 12 times a year, once a month, we have a release and move code from dev to prod. The database is accessed by several hundred Pro*C and Pro*Cobol programs for online transaction processing.
    The problem comes up when we need to have multiple development environments. If I need to work on something for May that requires the customer address field to be 50 characters and somebody else is working on something for July that requires the customer address field to be 100 characters, we can’t both function in the same schema. We have a method of configuring running programs to attach to a given schema/database. Currently, everything connects to the same place. We were told that we should not have the programs running as the owners of the schemas for some reason so we set up additional users. The SALES schema is accessed with the connect string: SALES_USER/[email protected]. (I don’t know where we got dot world from but that is not the current discussion.)
    One of the guys said that we should have 12 copies of the database running, which is kind of painful to think about in my opinion. Oracle is not a lightweight product and there are any number of ancillary processes that would have to be duplicated 12 times.
    My recommendation is that we have 12 schemas each for sales and inventory with 12 users each to access them. We would have something like JAN_SALES_USER, FEB_SALES_USER, etc. Each user would have synonyms set up for each of the tables it is interested in. When my program connects as MAY_SALES_USER, I could select from the customer table and I would get my 50 character address field. When the other user connects as JUL_SALES_USER, he would get his 100 character address field. Both of us would not know anything different.
    Another idea that came up is to have a logon trigger that would set the current schema for that user to the appropriate base schema. When JUL_SALES_USER logs in, the current schema would be set to JUL_SALES, etc. This would simplify things by allowing us to avoid having something like 2400 synonyms to maintain (which could be automated without too much difficulty) but it would complicate things by requiring a trigger.
    There are probably other ways to go about this we have not considered as yet. Any input you can give will be appreciated.
    Regards,
    /Bob Bryan

    Hans Forbrich wrote:
    I'd rather see you with 12 schemas than with 12 databases. Unless you have lots of CPUs to spare ... and lots of cash to pay for those extra CPU licenses.
    Then again, I'd take it one step further and ask to investigate the base design. There should be little reason to change the schema based on time. Indeed, from what little I know of your app, I'd have to ask whether adding a 'date' column and appropriate views or properly coded SQL statements might simplify things.
    Interesting. If we were to have one big Customer table with views for each month, how would we handle the case where the May people have to see a 50-character address field and the July people have to see a 100-character address field? I guess we could have MAY_ADDRESS VARCHAR2(50) and JULY_ADDRESS VARCHAR2(100) and take care to make sure that people connecting as May can only see the May columns, etc. Is this simpler than multiple schemas?
    I may have overly simplified things in my effort to get something down that would not require too much explanation. The big thing is that multiple people are doing development and they have to be independent of each other. If we were to drop a column for July, the May people will have trouble compiling if we don't keep things separate. It is not a case of making the data available; the data in development is something we cook up to allow us to test. The other part is that the code we compile now will be released to production at some point. In production, there is only a need for one database.
    We are moving from another database product where multiple databases are effectively different sets of files. We have lots of disk space, so multiple databases were no problem. Oracle is such a powerful product; I can't believe there is not some way to set up something similar.
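    To make the logon-trigger/CURRENT_SCHEMA idea from the question concrete, the statement each monthly user would need is ALTER SESSION SET CURRENT_SCHEMA. Here is a minimal JDBC sketch with hypothetical names (a Pro*C or Pro*Cobol program would simply execute the same ALTER SESSION, or an AFTER LOGON trigger could issue it automatically):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class CurrentSchemaSketch {
            public static void main(String[] args) throws Exception {
                // MAY_SALES_USER / MAY_SALES are hypothetical names for illustration.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "MAY_SALES_USER", "secret");
                     Statement st = con.createStatement()) {
                    // Point unqualified table names at the MAY_SALES schema for this session only.
                    st.execute("ALTER SESSION SET CURRENT_SCHEMA = MAY_SALES");
                    try (ResultSet rs = st.executeQuery("SELECT address FROM customer WHERE ROWNUM <= 5")) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1));
                        }
                    }
                }
            }
        }

    This avoids maintaining thousands of synonyms, and only the resolution of unqualified names changes; privileges still have to be granted on the underlying objects.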

  • One dump file import into multiple schemas

    Hi,
    I need to import one dump file into multiple schemas, but it's not working.
    impdp "'system/redhat as sysdba'" dumpfile=CTBNGUMB_14022011.dmp logfile=CTBNGUMB_IMP_1602.log directory=DUMP_DIR remap_tablespace=NGUMB_TS:TRUSTWARE01 remap_tablespace=TRUSTWARE:TRUSTWARE01 remap_tablespace=NEXTGENREF:TRUSTWARE01
    remap_schema=CTBNGUMB:CTBNGUMB_1,CTBNGUMB:CTBNGUMB_2,CTBNGUMB:CTBNGUMB_3,CTBNGUMB:CTBNGUMB_4
    How can I import one dump file into multiple schemas?

    Hi,
    I want to import the same schema into different schemas (users); is it possible?
    OS version is Linux 4.
    Oracle DB version is 10.2.0.4.
    While importing I get the error below:
    [oracle@sos02 10.2.0]$ impdp "'system/redhat as sysdba'" dumpfile=CTBNGUMB_14022011.dmp logfile=CTBNGUMB_IMP_17022011.log directory=DUMP_DIR remap_tablespace=NGUMB_TS:TRUSTWARE01 remap_tablespace=TRUSTWARE:TRUSTWARE01 remap_tablespace=NEXTGENREF:TRUSTWARE01 remap_schema=CTBNGUMB:CTBNGUMB_1 remap_schema=CTBNGUMB:CTBNGUMB_2 remap_schema=CTBNGUMB:CTBNGUMB_3 remap_schema=CTBNGUMB:CTBNGUMB_4
    Import: Release 10.2.0.4.0 - Production on Thursday, 17 February, 2011 9:24:32
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39046: Metadata remap REMAP_SCHEMA has already been specified.
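    For reference, the ORA-39046 above indicates that a given source schema can appear in only one REMAP_SCHEMA mapping per Data Pump job, so a common workaround (an assumption based on the error message, not stated in the thread) is to run one import per target schema against the same dump file, for example:

        impdp "'system/redhat as sysdba'" dumpfile=CTBNGUMB_14022011.dmp directory=DUMP_DIR logfile=imp_CTBNGUMB_1.log remap_schema=CTBNGUMB:CTBNGUMB_1 remap_tablespace=NGUMB_TS:TRUSTWARE01 remap_tablespace=TRUSTWARE:TRUSTWARE01 remap_tablespace=NEXTGENREF:TRUSTWARE01
        impdp "'system/redhat as sysdba'" dumpfile=CTBNGUMB_14022011.dmp directory=DUMP_DIR logfile=imp_CTBNGUMB_2.log remap_schema=CTBNGUMB:CTBNGUMB_2 remap_tablespace=NGUMB_TS:TRUSTWARE01 remap_tablespace=TRUSTWARE:TRUSTWARE01 remap_tablespace=NEXTGENREF:TRUSTWARE01
        (and likewise for CTBNGUMB_3 and CTBNGUMB_4)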

  • Database Design - multiple schemas

    Hi!
    We're currently designing a DB for an authentication system where several users from different companies (around 40) will have to be authenticated against Oracle. Authentication and fast recovery are important.
    Recovery/Backup
    An issue was raised: if a schema encounters a problem, you would of course have to recover all of its data. So we are considering using multiple schemas:
    One Company = One Schema
    That way, if one schema is down, the other schemas are not affected and recovery is faster.
    On the other hand, we're quite hesitant to use multiple schemas because of maintainability -- managing different schemas would be too much of a burden for our developers.
    Will the idea of having multiple schemas be advantageous to what we want to achieve?
    Is this a good design, or is there any other idea to handle this kind of situation?
    Can partitioning do the same?
    Thanks a lot

    Advantages of multiple schemas:
    - each schema is entirely separate
    - you can maintain at different times/dates for different companies
    - different schemas could be on different databases / servers
    Disadvantages
    - any 'shared' data may have to be duplicated (but you can always use a shared schema for reference data)
    - yes, you have to maintain each schema separately (but that would be by scripts, and at least they'd be well tested!)
    - the dictionary (SYS tables) will be somewhat larger (40 copies of table, index, and PL/SQL definitions)
    - you'll have 40 identical sets of SQL cached; they all look the same, but relate to different schemas, so you need a bigger SGA.
    Can partitioning do the same? No - partitioning is a solution to a physical problem, not a security problem.
    Is this a good design or is there any other idea? I think either way works - it depends on size, number of users, whether you are using a third tier, etc.
    Or, with a single schema, you can use VPD - Virtual Private Database (otherwise known as FGAC - fine-grained access control, or RLS - row-level security).
    See e.g. http://builder.com.com/5100-6388_14-5062064.html and also Ask Tom http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:70287097313911 which refer to the documentation.
    You can also implement a kind of VPD on the cheap by using user-defined namespaces and the SYS_CONTEXT function, combined with application logic and clever view definitions.
    HTH
    Regards Nigel

  • Oracle Portal w/ Multiple Schemas

    I posted this in the Architecture forums, but it looks like this forum may be better or yield more results for what I'm looking for.
    We are just about to launch a Portal initiative and are looking for best practices for implementing a single instance of Oracle Portal that interacts with multiple schemas. Has anyone here done this, or do you know where I can find this type of info? I have a few questions about how this works, how scalable it is, how security is handled, yada yada yada. --Kind Regards, Derek.

    Hi jlubbers. Thanks for your reply.
    After some additional discussions with our DBAs, I think I may have left a misleading post with respect to schemas. Our company acts as an application service provider to some degree. The way the databases are set up, each of our client agencies has their data stored completely separately from other agencies. The table structures are almost exactly identical (and can be considered identical for the scope of this discussion) and will never be merged together.
    We want to leverage Oracle Portal as much as we can while also getting as much reuse as we can. Each time we get data from the database, we want the datasource to be dynamic based on the usergroup.
    One constraint that we have is that the physical database connection must be connected as the user that's logged into the portal, so I think we have to do some type of User Proxy, but we're not 100% sure how to do that and what that means for all of our java applications.
    I'll check out your other posts and see if there is anything in there that may help. Thanks for taking the time. --Derek
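    On the "user proxy" point, Oracle's JDBC driver supports proxy authentication, where a pooled connection opened as an application account is re-authenticated as the end user without that user's password. A rough sketch with hypothetical names (the end user must first be granted CONNECT THROUGH the pool account):

        import java.sql.ResultSet;
        import java.sql.Statement;
        import java.util.Properties;
        import oracle.jdbc.OracleConnection;
        import oracle.jdbc.pool.OracleDataSource;

        public class ProxySessionSketch {
            public static void main(String[] args) throws Exception {
                OracleDataSource ods = new OracleDataSource();
                ods.setURL("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // hypothetical connect string
                ods.setUser("APP_POOL_USER");                       // hypothetical pool account
                ods.setPassword("secret");

                OracleConnection oc = (OracleConnection) ods.getConnection();
                Properties prop = new Properties();
                prop.setProperty(OracleConnection.PROXY_USER_NAME, "DEREK"); // hypothetical end user
                oc.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, prop);
                try (Statement st = oc.createStatement();
                     ResultSet rs = st.executeQuery("SELECT USER FROM dual")) {
                    rs.next();
                    System.out.println("Now connected as: " + rs.getString(1));
                } finally {
                    oc.close(); // also ends the proxy session
                }
            }
        }

    Whether this fits Oracle Portal's own connection handling is a separate question; the sketch only shows the JDBC side of proxying.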

  • Multiple schemas

    A project here has the following requirement.
    We want to use multiple schemas in the same database instance with identical structure. That is, from one login we want to determine the schema name to use at runtime and access it using the same mapping descriptors.
    Is this possible with TopLink? Thanks.
    Haiwei

    Hello James,
    Thanks for the information. However, the setting does not seem to make any difference.
    For example, I was trying to do the following.
    The ServerSession is created with a connection pool as the database user TESTUSER. Then I get a ClientSession and set the table qualifier on it:
    ClientSession clientSession = getServerSession().acquireClientSession();
    clientSession.getLogin().setTableQualifier("SCOTT");
    unitOfWork = clientSession.acquireUnitOfWork();
    I then expected SQL statements like
    SELECT empname FROM SCOTT.emp;
    but that is not happening and the queries are generated without any qualifier on the table names.
    Am I missing anything? I am using Oracle TopLink - 10g Release 3 (10.1.3.3.0).
    Thanks and Regards
    Potu

  • JDBC (JNDI): Connection dynamically managed

    Hi all,
    I'm using
    1) Crystal Reports 2008 for the reports' design,
    2) Java Reporting Component (version 18.8.4.1094) for the integration in java environment,
    3) JBoss (version 4.2.3) as Application Server
    4) Oracle 11 as DBMS
    in a web application.
    When I create a report using JDBC (JNDI) and specify the string "java:jdbc/name_data_source" as the "Connection Name (optional)",
    how are the connections of the report and any sottoreports managed? In other words, are the connections dynamically managed by the web or EJB container?
    Thank you very much.

    Hi, Andrea,
    I'm gonna' make a few guesses here:
    2) Java Reporting Component (version 18.8.4.1094)
    I think you mean Crystal Reports for Java (CRJ) 2.8 version 12.2.209.1094. Neither the JRC (the name for the 1.x series) nor the CRJ (the 2.x series) has any version number in the 18.x range.
    sottoreports
    I think that means "subreports."
    are the connections dynamically managed by the container web or ejb?
    There are two ways the connections can be managed. If you've created a JNDI connection (a named connection), then you can base a report off of it, then at a later date change where the JNDI connection itself points to and the report should still work correctly, as long as the new target database shares the same schema as the original.
    The other way you can manage the connection is to use code to change the database connection information inside the report itself. In this way, you could change it from one JNDI connection to another or even to another type of connection altogether. Again, however, the schemas between the old and the new connections must match.
    Regards,
    Bryan

  • Multiple Schemas under one user account with XE 10g

    Hi,
    I am using (learning) XE 10g. I would like to know if it is possible to have multiple schemas under one user account and have the schemas logically separated. As of right now, I have three schemas that I am working with, each one under a different user account. This is inconvenient, because I have to log out of one user account and log in to another user account simply to be able to work with another schema.
    Thanks

    It isn't possible to have multiple schemas under one database user account. It is of course possible to grant rights to other database users, and or roles, in order to allow access to the tables/data from other accounts. In Oracle there is a one-one mapping between schema and user.
    Niall Litchfield
    http://www.orawin.info/
