4-tier architecture

Hello everybody,
Can we say that we have a 4-tier architecture when we work with MapViewer (to show maps) and Tomcat (to administer the database: inserts, updates and so on)?
Thanks

According to your description, it seems to me that you have a 3-tiered architecture: the presentation tier (web browser + client-side APIs), the application tier (MapViewer server and other middle-tier applications), and the data tier (Oracle DB). Your Tomcat task could be part of the presentation tier (with the UI in a browser) and part of the application (data management) tier.
You might find more information in:
http://en.wikipedia.org/wiki/Multitier_architecture
jack

Similar Messages

  • Two-tiered Architecture Setup

    Hi All,
    I need some help with setting up a two-tiered messaging architecture. I want to have a front-end messaging server in a DMZ with a 'real' IP address and an official FQDN (mx1.external.org), and a back-end messaging server as the mail store with a private IP address and a private FQDN (mail.internal.pri). The front-end server should do some spam/virus checking and then send the emails over to the internal server. Between the two is a packet-filtering firewall.
    This setup works perfectly with two Postfix servers, but I want to try out Messaging Server, so I wanted to ask for some documentation on how this can be done. I already followed the documentation in chapter 16 (LMTP) of the Messaging Server administration guide; Directory Server and Delegated Administrator for user provisioning also work, but I obviously have difficulties understanding the interrelationship between "mail host", "preferred mail host" and the FQDN of the external and internal messaging servers. In my current configuration I get
    Recipient address: @mail.internal.pri.lmtp:uid@lmtpcs-daemon
    Original address: [email protected]
    Reason: Illegal host/domain name found
    from the external server, which I understand because of the private "pri" top-level domain name. But when I change the mailhost to the external server, the external server says (correctly) that the mailbox is on another host.
    Can someone point me to some documentation where I can learn about all this, or does someone have a similar setup and can help?
    Thank you.
    Greetings,
    Willi

    Please always provide the exact version of Messaging Server that you are running (./imsimta version).
    > ... I obviously have difficulties in understanding the interrelationship between "mail host", "preferred mail host" and the FQDN of the external and internal messaging servers.
    The "mailhost:" LDAP attribute is used by the MTA to route email to the system that is responsible for storing the user's email messages.
    The "preferredmailhost:" LDAP attribute is used by Delegated Administrator to determine the default user mailhost: attribute value when a new user is created.
    In my current configuration I get
    Recipient address: @mail.internal.pri.lmtp:uid@lmtpcs-daemon
    Original address: [email protected]
    Reason: Illegal host/domain name found
    Do you have an entry in your DNS/hosts file for "mail.internal.pri"?
    Regards,
    Shane.

  • Number of sockets for HTTP session replication in multi-tiered architecture

              In the WebLogic documentation on clustering, the section
              "Configuration Notes for Multi-tier Architecture" states:
              "In particular, during peak socket usage, each WebLogic Server in
              the cluster that hosts servlets and JSPs may potentially use a
              maximum of:
              Two sockets for replicating HTTP session states between primary
              and secondary servers."
              I don't understand why 2 sockets are required. Won't just one do
              the job? What's the 2nd one for?
              pradeep
              

              hi,
              The doc mentions 3 sockets in total. I have pasted the doc
              content FYI:
              "In particular, during peak socket usage, each WebLogic Server in the cluster
              that hosts servlets and JSPs may potentially use a maximum of:
              Two sockets for replicating HTTP session states between primary and secondary
              servers, plus
              One socket for each WebLogic Server in the EJB cluster, for accessing remote objects"
              So 3 sockets in total: 2 for HTTP session replication and 1 for accessing remote objects.
              Why are 2 sockets required for replicating HTTP session states between primary
              and secondary servers?
              pradeep
              Kumar Allamraju <[email protected]> wrote:
              > the docs also say:
              > One socket for replicating HTTP session states between primary and
              > secondary servers, plus
              > One socket for each WebLogic Server in the EJB cluster, for accessing
              > remote objects
              

  • Tiered Architecture and Clustering

    Hi,
              Was wondering if anyone has had problems setting up clustering in a
              tiered environment. What we are looking to do is set up a load balancer
              in front of a WebLogic presentation tier (servlets and JSPs) that in
              turn connects to a clustered application tier of EJBs. Just some
              random thoughts:
              1) Does the replica-aware stub live in the presentation tier?
              2) Does it make sense to put another load balancer between tiers? Or
              just let the stub handle the load balancing?
              Curious to know if anyone else has set up a similar configuration.
              Regards,
              Rick Mitterer
              

    I have seen several such setups. You do not usually have a dedicated
              hardware load balancer between the JSP/Servlet container and the EJB
              container. The replica-aware stub is where the EJB client is, which is the
              JSP/Servlet container.
              Peace,
              Cameron Purdy
              Tangosol Inc.
              << Tangosol Server: How Weblogic applications are customized >>
              << Download now from http://www.tangosol.com/download.jsp >>
              "Rick Mitterer" <[email protected]> wrote in message
              news:[email protected]..
              > Hi,
              > Was wondering if anyone had problems setting up clustering in a
              > tiered environment. What we are looking to do is setup a load balancer
              > in front of a weblogic presentation tier (Servlets and JSP) that in
              > turn connect to a clustered application tier of EJB's. Just some
              > random thoughts:
              >
              > 1) Does the replica aware stub live in the presentation tier?
              >
              > 2) Does it make sense to put another load balancer between tier's? Or
              > just let the stub handle the load balancing?
              >
              > Curious to know if anyone else has setup a similar configuration.
              >
              > Regards,
              > Rick Mitterer
              

  • Developer 6.0.5 three-tiered architecture application deployed on the web

    Sir/Miss,
    I have two questions.
    1. In the browser I can't use 'button'.
    2. In the browser I can't use 'treeview'.
    Server
    Operating System: Windows NT 4.0 with Service Pack 3
    Netscape Communicator 4.04 (with the JDK 1.1 patch) or 4.7
    Oracle RDBMS 8.0.5
    Oracle Application Server 4.0.7.0.0+4.0.7.1.0 patch
    Oracle Developer Server 6.0
    Client
    Operating System Windows 95 or Windows NT 4.0
    Netscape Communicator 4.04 (with the JDK 1.1 patch) or 4.7
    Oracle JInitiator 1.1.7.11

    Maybe I can answer part of the questions. First, when you use Developer 6 to create either Forms or Reports, the application can run in either a client/server or a web environment. To run it in a web environment, you need OAS as the middle tier. The front end only needs a Java-enabled browser; JInitiator acts as a bridge in between. You only need server-side deployment.
    For JDeveloper, it generates Java classes for you; it requires you to understand Java and write some Java code.
    Neither tool needs any client-side or server-side license; just the cost of the tool.

  • N-tiered programming in Java

    I'm new to Java/JSP and am looking for tutorials and examples of how to do web n-tiered programming in Java. In .NET, I've often separated the web presentation layer from the business logic and data access layers. How do you accomplish this in Java? Thank you.

    What do you mean by "accomplish"? You write Java classes. N-tiered architecture is not limited to a specific programming language; it is a paradigm.
    Additionally, check this out: http://java.sun.com/blueprints/corej2eepatterns/Patterns/

  • SSO using Kerberos with SAP Logon Tickets

    Hi,
    I am creating a Repository Manager for the Portal Knowledge Management System and I want to use SSO to a backend IIS application and I have a few questions here. 
    I have a three tiered architecture. 
    A.  The presentation tier (SAP Portal which has my Repository Manager implementation)
    B.  ASP.NET web service data layer.
    C.  Backend document management system which runs on IIS. 
    I have installed the ISAPI filter on my ASP.NET application server and have enabled this HOST account for delegation in MSAD 2003.   Server B will use Kerberos constrained delegation to access Server C, which is an IIS backend server. 
    My question is how do I pass an SAP Logon Ticket to an ASP.NET web service request from my Repository Manager implementation?  Basically how do I just make an HTTP request to an ASP.NET application from some portal iView or WebDynPro code and pass along the SAP Logon Ticket in the request so it can be interpreted by the ISAPI filter on the IIS server.  Does anyone have any sample code or an application here that does this?
    Thanks,
    Scott

    Hi Scott
    Did you manage to find out anything about how to pass the SAP Logon Ticket to an ASP.NET web service? Can you share it with me?
    regards
    ram

  • How to deploy Forms 6i FMX files through Application Server 10g

    Hello everyone,
    Situation:
    In my company, we have a software package developed by a team using Oracle Forms 6i. The application is connected to an Oracle 9i database and everything works well.
    Problem:
    I was able to upgrade the database from 9i to 10g. But my question: is it possible to deploy all the forms (FMX, 6i version) on Application Server 10g? I want to use the 3-tier architecture.
    Note that I do not have the FMB source files, just the FMX files (about 40).
    If yes, can you also explain how to proceed (tools to install on the Application Server) and how to set up the connection between the 3 tiers through a LOCAL NETWORK?
    Regards
    Kira

    You will at least need to recompile all those *.fmb to *.fmx in the version of the App Server you will be using.
    We have a Forms 6 c/s application we are migrating to Forms 11g and it is almost that easy (except for changes in how you launch reports).
    The application server is set up a lot like the client/server side of Forms: it needs a tnsnames.ora that points to the database, and it needs to know where you will be storing the forms and reports. It looks for any *.fmx in what will be the default location.

  • Update row in a table based on join on multiple rows in another table

    I am using SQL Server 2005. I have the following update query which is not working as desired.
    UPDATE DocPlant
    SET DocHistory = DocHistory + CONVERT(VARCHAR(20), PA.ActionDate, 100) + ' - ' + PA.ActionLog + '. '
    FROM PlantDoc PD INNER JOIN PlantAction PA on PD.DocID = PA.DocID AND PD.PlantID = PA.PlantID 
    For each DocID and PlantID in PlantDoc table there are multiple rows in PlantAction table. I would like to concatenate ActionDate and ActionLog information into DocHistory column of DocPlant table. But the above update query is considering only one row from
    PlantAction table even though there are multiple rows that match with DocID and PlantID.
    DocHistory column is of type NVARCHAR(MAX).
    How do I fix my query to achieve what I want ? Thanks for the help.

    UPDATE DocPlant
    SET DocHistory = DocHistory + CONVERT(VARCHAR(20), PA.ActionDate, 100) + ' - ' + PA.ActionLog + '. '
    FROM PlantDoc PD INNER JOIN PlantAction PA on PD.DocID = PA.DocID AND PD.PlantID = PA.PlantID 
    We do not use the old Sybase UPDATE..FROM.. syntax. Google it and learn how it does not work. We do not use the old Sybase CONVERT() string function. You are still writing 1950's COBOL with string dates instead of temporal data types. 
    You also did not post DDL, so we have to guess about everything. Does your boss make you work without DDL? How do you do it? 
    >> For each DocID and PlantID in PlantDoc table there are multiple rows in PlantAction [singular name?] table. I would like to concatenate ActionDate and ActionLog information into DocHistory column of DocPlant table. <<
    Why? What does this new data element mean? This is like dividing Thursday by Red and expecting a reasonable answer. Now, non-SQL programmers who are still writing COBOL will violate the tiered architecture rule about doing display formatting in the database.
    If you will follow forum rules, we can help you. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
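    For reference, a minimal sketch of one way to get the concatenation the question asks for on SQL Server 2005: a correlated FOR XML PATH('') subquery builds the per-row string. It assumes DocPlant carries the same DocID and PlantID columns used in the join, and that XML entitization of characters such as & and < in ActionLog is acceptable:
    UPDATE DP
       SET DocHistory = DP.DocHistory +
           (SELECT CONVERT(VARCHAR(20), PA.ActionDate, 100) + ' - ' + PA.ActionLog + '. '
              FROM PlantAction AS PA
             WHERE PA.DocID = DP.DocID
               AND PA.PlantID = DP.PlantID
             ORDER BY PA.ActionDate
               FOR XML PATH(''))
      FROM DocPlant AS DP
     WHERE EXISTS (SELECT 1 FROM PlantAction AS PA
                    WHERE PA.DocID = DP.DocID AND PA.PlantID = DP.PlantID);
    The EXISTS guard keeps rows with no matching actions from having their DocHistory set to NULL by the scalar subquery.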

  • Getting the data for last month of every year

    Hi,
           How do we declare the date if we want to pull the data from December of the previous year?
    For example, if the query is run in March 2015 and they want the historical data, it should pull only the data from Dec 2014.
    In the same way, if they run the query in June 2016 and want the historical data, it should pull only the data from Dec 2015.
    It should not be coded manually. Please help me with the date format that needs to be used.
    BALUSUSRIHARSHA

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    A table has to have a key to be a table.  Here is my guess at a repair job: 
    CREATE TABLE Test_Data
    (pu_id INTEGER NOT NULL
      REFERENCES PU(pu_id),
     pu_date DATE DEFAULT CURRENT_TIMESTAMP NOT NULL,
     PRIMARY KEY (pu_id, pu_date),
     x_count INTEGER,
     y_count INTEGER);
    Identifiers are not numeric in a good schema. What math do you do with them? They are also the key in the table that models the entity they identify. Where is the PU table (and what is a PU anyway)? 
    INSERT INTO Test_Data
    VALUES
    (28, '2014-01-01', 10, 20), -- crap! No key in this mess!! 
    (28, '2015-01-01', 30, 20), -- 
    (28, '2014-12-12', 10, 20), 
    (28, '2015-02-02', 10, 20);
    A PIVOT is not a query and not even part of SQL. It is how Microsoft programmers who do not know RDBMS or have a report writer violate the tiered architecture of SQL. We also do not use XML mixed in SQL. It is a bitch to maintain, has poor performance and again
    violates the tiered architecture principle. 
    A query is not sorted because it is a table. A file in COBOL can be sorted and that seems to be what you really want to write. 
    Old COBOL love the Sybase CONVERT() string function to avoid SQL temporal data. 
    We never use SELECT * in production code; Google it. Not only are you generating code, you are generating bad code. 
    Since SQL is a database language, we prefer to do look ups and not calculations. They can be optimized while temporal math messes up optimization. A useful idiom is a report period calendar that everyone uses so there is no way to get disagreements in the DML.
    The report period table gives a name to a range of dates that is common to the entire enterprise. 
    CREATE TABLE Month_Periods
    (month_name CHAR(10) NOT NULL PRIMARY KEY
       CHECK (month_name LIKE <pattern>),
     month_start_date DATE NOT NULL,
     month_end_date DATE NOT NULL,
      CONSTRAINT date_ordering
        CHECK (month_start_date <= month_end_date),
    etc);
    These report periods can overlap or have gaps. I like the MySQL convention of using double zeroes for months and years; that is, 'yyyy-mm-00' for a month within a year and 'yyyy-00-00' for the whole year. The advantages are that it will sort with the ISO-8601 date format required by Standard SQL and it is language independent. The patterns for validation are '[12][0-9][0-9][0-9]-00-00' and '[12][0-9][0-9][0-9]-[01][0-9]-00'.
    This will port, and it avoids wasting time calling string functions row by row. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
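    For the specific question asked (data from December of the year before the run date), a minimal sketch that derives the month boundaries from the current date instead of hard-coding them; it assumes SQL Server 2012 or later for DATEFROMPARTS and uses the Test_Data columns guessed above:
    DECLARE @dec_start DATE;
    DECLARE @jan_start DATE;
    SET @dec_start = DATEFROMPARTS(YEAR(GETDATE()) - 1, 12, 1); -- 1 December of the previous year
    SET @jan_start  = DATEFROMPARTS(YEAR(GETDATE()), 1, 1);     -- 1 January of the current year
    SELECT pu_id, pu_date, x_count, y_count
      FROM Test_Data
     WHERE pu_date >= @dec_start
       AND pu_date <  @jan_start;  -- half-open range keeps the predicate sargable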

  • Saving result from sp_executesql into a variable and using dynamic column name - getting error "Error converting data type varchar to numeric"

    I'm getting an error when running a procedure that includes this code.
    I need to select from a dynamic column name and save the result in a variable, but seem to be having trouble with the values being fed to sp_executesql
    DECLARE @retval AS DECIMAL(12,2)
    DECLARE @MonthVal VARCHAR(20), @SpreadKeyVal INT
    DECLARE @sqlcmd AS NVARCHAR(150)
    DECLARE @paramdef NVARCHAR(150)
    SET @MonthVal = 'Month' + CAST(@MonthNumber AS VARCHAR(2) );
    SET @SpreadKeyVal = @SpreadKey; --CAST(@SpreadKey AS VARCHAR(10) );
    SET @sqlcmd = N' SELECT @retvalout = @MonthVal FROM dbo.CourseSpread WHERE CourseSpreadId = @SpreadKeyVal';
    SET @paramdef = N'@MonthVal VARCHAR(20), @SpreadKeyVal INT, @retvalout DECIMAL(12,2) OUTPUT'
    --default
    SET @retval = 0.0;
    EXECUTE sys.sp_executesql @sqlcmd,@paramdef, @MonthVal = 'Month4',@SpreadKeyVal = 1, @retvalout = @retval OUTPUT;
    SELECT @retval
    DECLARE @return_value DECIMAL(12,2)
    EXEC @return_value = [dbo].[GetSpreadValueByMonthNumber]
    @SpreadKey = 1,
    @MonthNumber = 4
    SELECT 'Return Value' = @return_value
    Msg 8114, Level 16, State 5, Line 1
    Error converting data type varchar to numeric.

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> I need to select from a dynamic column name and save the result in a variable, but seem to be having trouble with the values being fed to sp_executesql <<
    This is so very, very wrong! A column is an attribute of an entity. The idea here is that you are so screwed up that you have no idea whether you want the shoe size, the phone number or something else about this entity until run time. 
    In software engineering we have a principle called cohesion that says a module should do one and only one task, have one and only one entry point, and one and only one exit point. 
    Hey, on a scale from 1 to 10, what color is your favorite letter of the alphabet? Yes, your mindset is that level of silliness and absurdity. 
    Do you know that SQL is a declarative language? This family of languages does not use local variables! 
    Now think about “month_val” and what it means. A month is a temporal unit of measurement, so this is as silly as saying “liter_val” in your code. Why did you use “sp_” on a procedure? It has special meaning in T-SQL.  
    Think about how silly this is: 
     SET @month_val = 'Month' + CAST(@month_nbr AS VARCHAR(2));
    We do not do display formatting in a query. This is a violation of the tiered architecture principle. We have a presentation layer. But more than that, the INTERVAL temporal data type is a {year-month} and never just a month. This is fundamental. 
    We need to see the DDL so we can re-write this mess. Want to fix it or not?
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
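    For what it is worth, the immediate cause of the posted error is that sp_executesql parameters are always treated as values, never as identifiers: SELECT @retvalout = @MonthVal just tries to convert the string 'Month4' to DECIMAL(12,2), which is exactly the reported message. A minimal sketch of the usual workaround, assuming it runs inside the procedure so that @MonthNumber and @SpreadKey are its parameters; the column name is validated and spliced into the SQL text with QUOTENAME, while the key and the output value remain real parameters:
    DECLARE @retval DECIMAL(12,2);
    DECLARE @MonthVal SYSNAME;
    DECLARE @sqlcmd NVARCHAR(300);
    SET @retval = 0.0;
    SET @MonthVal = 'Month' + CAST(@MonthNumber AS VARCHAR(2));
    -- refuse anything that is not an actual column of dbo.CourseSpread
    IF COL_LENGTH('dbo.CourseSpread', @MonthVal) IS NULL
    BEGIN
        RAISERROR('Unknown column name.', 16, 1);
    END
    ELSE
    BEGIN
        -- the column name goes into the SQL text; the key and the output stay as parameters
        SET @sqlcmd = N'SELECT @retvalout = ' + QUOTENAME(@MonthVal)
                    + N' FROM dbo.CourseSpread WHERE CourseSpreadId = @SpreadKeyVal';
        EXECUTE sys.sp_executesql @sqlcmd,
                N'@SpreadKeyVal INT, @retvalout DECIMAL(12,2) OUTPUT',
                @SpreadKeyVal = @SpreadKey,
                @retvalout = @retval OUTPUT;
    END;
    SELECT @retval;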

  • Publication of the big-sized Enterprise projects from Project Professional 2013 to Project Server 2013 takes about 60+ minutes.

    Dear Sirs,
    I need your support over the following MS EPM 2013 issue:
    Publication of big-sized Enterprise projects from Project Professional 2013 to Project Server 2013 takes about 30+ minutes. We
    need to reduce this total publication time down to an acceptable working value of roughly 10 minutes.
    Environment information:
    Single App Server (Virtual): 16 Gb RAM, x64 4xCPU, HDD > 50 GB free disk space, OS Windows Server 2012 Standard Edition x64 Service
    Pack 1, MS SharePoint Server 2013 and MS Project Server 2013 with CU December 2013 (KB 2850024) applied.
    Single RDBMS MS SQL Server (Virtual): 8 Gb RAM, x64 4xCPU, HDD > 200 GB free space, OS Windows Server 2012 Standard Edition x64 Service
    Pack 1, MS SQL Server 2012 x64 SP 1 Enterprise Edition.
    We have 1Gbit LAN between APP, DB server and 1Gbit LAN between APP and Proj Prof Client.
    Yes, we are on the way to migrating to the Prod environment
    with a 3-tiered architecture (with SP1 slipstream and CU December 2014 applied), but this issue is also
    present there.
    Project’s file information:
    Tasks in the file: [~4900], resources in the file: [~396], enterprise task custom fields used in the file: [~23].
    Case 1: Issue description:
    During the Enterprise project’s file save and publication we have the following sharepoint 2013 log messages:
    07.31.2014 12:43:17.22 Microsoft.Office.Project.Server (0x0358) 0x3D5C SharePoint Foundation Monitoring b4ly High Leaving Monitored Scope
    (Persisting list changes). performing time =376.068676326181 22dca99c-4696-70f1-e9e2-06851d0bcffd
    07.31.2014 12:43:17.69 Microsoft.Office.Project.Server (0x0358) 0x3D5C SharePoint Foundation Monitoring b4ly High Leaving Monitored Scope
    (Persisting list changes). performing time =361.652807828928 22dca99c-4696-70f1-e9e2-06851d0bcffd
    It shows that SharePoint spends at least ~350 milliseconds (or 0.35 sec * 4900 tasks = 1715 sec, or 28.5 min) on each task update during
    project publication. We also have another log file that shows SharePoint spending about 0.7 sec (or 0.7 sec * 4900 tasks = 3430 sec, or 57 min) saving each task in the project file to Project Server. So the total save and publication time is more than 60 minutes
    for that project file. We get the same result even if the user didn't make any changes to the project file.
    We use only enterprise projects (dbo.MSP_EpmProject_UserView.projectvisibilitymode = «False») and do not use SharePoint task lists,
    but the synchronization between the MSP plan and the SharePoint task list runs in any case.
    Case 2: Issue description:
    - For the second test we created a new project with a new SharePoint project site on the basis of our «issue» project, with a total
    of 5148 tasks in it (yes, we increased the task list default limit at the SharePoint site up to 6000 items - the standard limit for a SharePoint list view is 5000 items).
    - Project save procedure for this new project would last about 7 minutes. Project publication would last about 47 minutes. We noticed
    that tasks synchronization process took about 1 second for each ~2,5 tasks, to add them to the sharepoint tasks list. So for all 5148 tasks it took about 5148/3/60 =  34 minutes. Other 13 min was used for reporting database publication and other tasks
    relevant for new sharepoint site creation.
    - Then we deleted the
    task list for that new test project from the SharePoint site and republished the project plan one more time. This time the project save procedure took about 7 minutes, project publication about 2 minutes, and other relevant queue jobs 3 minutes. So the total time
    is 12 minutes.
    As a conclusion: yes, we have determined
    the exact problem - during the synchronization process (from Project Server to SharePoint) it copies all tasks and related data from Project to SharePoint regardless of whether you changed only ONE task or ALL of them. In any case, synchronization will
    copy ALL of them from Project Server to the SharePoint task list.
    Our workaround is to disable task synchronization for such big-sized project plans:
    - delete the SharePoint «tasks» list at the SharePoint site tied to the project plan, or
    - detach the SharePoint site from the project plan.
    Thank you for reading this topic; if you have also faced such an issue, please share any known workaround or any official response
    / feedback from MS about it.
    Thank you in advance,
    Best Regards, Andrey

    Regarding my topic, I also said that every time the sync runs it updates ALL items from the project plan at Project Server to the corresponding task list at the SharePoint server, in spite of the fact that you changed only one task, a group, or
    all of them in your project plan.
    And it seems to me and my colleagues that it is probably (maybe) a bug in the product. Here is what we have if we look a little bit closer at the code:
    SharePoint determines which tasks to sync from the project plan to the SharePoint list. To do that, SharePoint needs to know whether a task changed or not, based on the following fields (check the SQL stored procedure "[MSP_READ_TASKS_FOR_SYNCRONIZATION]"):
    TASK_UID    TASK_NAME    TASK_START_DATE    TASK_FINISH_DATE    TASK_PCT_COMP    TASK_PARENT_UID    TASK_OUTLINE_NUM   
    WSS_LISTITEM_UID   TASK_ID    TASK_IS_ACTIVE
    We noticed that in any case synchronization runs for all tasks EXCEPT the root one. Then we looked at the comparison of the TASK_PARENT_UID field. SharePoint compares TASK_PARENT_UID with ParentID (this is the internal name of the lookup
    field "Tasks" in SharePoint, and it stores its values in the format "ID;#Title").
    The comparison is performed as follows:
    1. SharePoint looks for the task in the task list corresponding to the project plan with the ID given by the TASK_PARENT_UID field. It then takes the SharePoint list item ID ("int" type) and stores it in the "num" variable:
    num = this.GetCachedListItemByUniqueId(listItem.ParentList, nullable.Value).ID;
    2. Then it compares "num" with the task's "ParentID" in SharePoint using the "!=" operator:
    ((SPItem) listItem)["ParentID"] != (System.ValueType) num
    3. If the comparison succeeds (true), this indicates that the task's value in the project plan has changed and needs to be synchronized. The corresponding method sets a "true" flag and returns it.
    The "bug" is that the expression in step 2 will always return "true", because it actually compares a "string" (see above - this is a lookup field on the SharePoint side)
    with a "number". For example, if the parent task ID is "55", then we get:
    "55;#Task 1" != 55
    And by the rules of .NET a "string" will never equal a "number".
    Furthermore, this is confirmed by the SharePoint logs:
    in that case we always get the note "Setting ParentID to" in the logs (we see it if Verbose is turned on for the "Project Server" -> "SharePoint Integration" category).
    So on any publish of the project plan we always get that note in the logs for tasks that have a parent task, and all of them have a parent EXCEPT the root one. The exact logs follow:
    10/15/2014 02:37:32.26    Microsoft.Office.Project.Server (0x07D8)    0x06E8    Project Server    Sharepoint Integration    ado0d   
    Verbose    Setting ParentID to 1    bf2fc29c-7727-b00d-fa4a-34f22ea9ec1d 10/15/2014 02:37:32.62    Microsoft.Office.Project.Server (0x07D8)    0x06E8   
    Project Server    Sharepoint Integration    ado0d    Verbose   
    Setting ParentID to 1    bf2fc29c-7727-b00d-fa4a-34f22ea9ec1d 10/15/2014 02:37:32.63    Microsoft.Office.Project.Server (0x07D8)    0x06E8    Project Server   
    Sharepoint Integration    ado0d    Verbose   
    Setting ParentID to 1    bf2fc29c-7727-b00d-fa4a-34f22ea9ec1d 10/15/2014 02:37:32.67    Microsoft.Office.Project.Server (0x07D8)    0x06E8    Project Server   
    Sharepoint Integration    ado0d    Verbose   
    Setting ParentID to 1    bf2fc29c-7727-b00d-fa4a-34f22ea9ec1d 10/15/2014 02:37:32.69    Microsoft.Office.Project.Server (0x07D8)    0x06E8    Project Server   
    Sharepoint Integration    ado0d    Verbose   
    Setting ParentID to 5    bf2fc29c-7727-b00d-fa4a-34f22ea9ec1d
    The following is the complete method code from the corresponding reflector output:
    private bool UpdateParentID(DataSet taskDS, DataRow row, SPListItem listItem, Dictionary<Guid, SPListItem> redoEntries)
    {
        bool flag = false;
        int index = taskDS.Tables[0].DefaultView.Find((object) DataRowExtensions.Field<Guid>(row, "TASK_PARENT_UID"));
        if (index >= 0)
        {
            Guid? nullable = DataRowExtensions.Field<Guid?>(taskDS.Tables[0].DefaultView[index].Row, "WSS_LISTITEM_UID");
            int num = -1;
            if (listItem.Fields.ContainsField("ParentID"))
            {
                if (nullable.HasValue)
                {
                    try
                    {
                        // STEP 1
                        num = this.GetCachedListItemByUniqueId(listItem.ParentList, nullable.Value).ID;
                    }
                    catch (ArgumentException ex)
                    {
                        if (redoEntries != null)
                        {
                            if (!redoEntries.ContainsKey(DataRowExtensions.Field<Guid>(row, "TASK_UID")))
                                redoEntries.Add(DataRowExtensions.Field<Guid>(row, "TASK_UID"), listItem);
                        }
                    }
                }
                // STEP 2
                if (num != -1 && ((SPItem) listItem)["ParentID"] != (System.ValueType) num)
                {
                    ((SPItem) listItem)["ParentID"] = (object) num;
                    ULS.SendTraceTag(845443U, (ULSCatBase) ULSCat.msoulscat_PS_ProjectSharepointIntegration, ULSTraceLevel.Verbose, "Setting ParentID to {0}", new object[1]
                    {
                        ((SPItem) listItem)["ParentID"]
                    });
                    // STEP 3
                    flag = true;
                }
                else if (((SPItem) listItem)["ParentID"] != null)
                {
                    ((SPItem) listItem)["ParentID"] = (object) null;
                    ULS.SendTraceTag(2495056U, (ULSCatBase) ULSCat.msoulscat_PS_ProjectSharepointIntegration, ULSTraceLevel.Verbose, "Resetting ParentID to null");
                    flag = true;
                }
            }
        }
        return flag;
    }
    Any thoughts about it would be much appreciated!

  • Printing/report writing

    Could anyone recommend any good printing/report writing tools that can be
    integrated with Forte? I have found these capabilities to be less than
    satisfactory from within Forte itself.

    Hi Matt,
    We have an enterprise-strength report writing product (Report Workshop
    for Forte) built in Forte, with native support for integrating with
    Forte applications. You can visit our website
    http://www.indcon.com/products for further information. This product is
    now available for evaluation and sale.
    Report Workshop for Forte:
    Report Workshop for Forté(TM) is an enterprise-strength, distributed
    report development and management environment. Report Workshop is a
    user-friendly, adaptable, scalable and versatile environment to develop and
    distribute reports. It has the capability to scale with increasing load
    and makes optimal use of resources owing to its server-centric,
    multi-tiered architecture. It supports multiple report formats, multiple
    RDBMSs and even non-relational data.
    Forté is ingrained in Report Workshop, providing seamless integration
    with Forté applications. Report Workshop also leverages Forté's
    capability of providing a scalable architecture for distributed
    business applications.
    Report Workshop Capabilities
    WYSIWYG Report Development Environment
    *Browse distributed database schema in easy graphical way
    *Jump start with default report formats
    *Override report formats to suit specific needs with point and click
    ease
    *Preview reports with actual data
    *Iterate above steps until perfection is reached
    Server-Based Enterprise Strength Reporting
    *N-tiered scaleable application
    *Share the report objects
    *Execute once and share the reports among end users
    *Optimizes database connections
    *Minimal network traffic with capability of shipping one report page at
    a time
    Distribute reports with state-of-the-art distribution channels
    *E-mail
    *Publish HTML on Web
    *Network printing
    *View it with viewer
    *Save in Excel format for further analysis
    Schedule Management
    *Create schedules for periodic execution and distribution
    *Customize schedules to suit your organization's holiday plan
    *View history of schedule runs
    Version Management
    *Retain report results for future use
    *Define purge policy
    *View/Print/E-mail versioned reports
    Native Forté Application Program Interface
    *Integrate your Forté application with Report Workshop
    Rich Features
    Support for multiple report formats
    *Tabular
    *Grid
    *Group
    *Free
    *Composite
    Support for multiple data sources
    *SQL (Oracle, Sybase, ODBC, DB2, Ingres and Rdb Databases)
    *External Data Source ( Forte Applications)
    *CORBA Objects
    Client and server based printing (on NT servers)
    For additional information about Report Workshop for Forté, please feel
    free to contact us. 
    An evaluation copy of Report Workshop is available and can be downloaded
    from the Internet.
    Indus Consultancy Services
    140, E.Ridgewood Ave.
    Paramus, NJ 07661
    www.indcon.com
    Phone: 201-261-3100
    - Pradnesh Dange

  • How to use convert function in my issue

    CASE WHEN RATE_CODE='BS' THEN '0.00' ELSE (COST/TOTAL_UNITS) END AS COST - this expression shows 0.0000 when RATE_CODE='BS', but I want only 0.00. 
    Thank you all.
    Actually my query returns 0.0000 when RATE_CODE='BS', but I want the value 0.00 to display. How can I achieve this?

    We do not do display in the database; we have a presentation layer. This is the most basic principle of any tiered architecture. I will guess you have been programming for only a few weeks, but this is usually covered in the
    early part of any course. 
    You probably need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
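    A minimal sketch of one way to get two decimal places, using the column names from the question. The CASE expression takes the numeric type of the ELSE branch, so the string literal '0.00' is converted up to that precision and prints as 0.0000; casting the whole expression to a two-decimal type (or, as noted above, formatting in the presentation layer instead) gives 0.00:
    CAST(CASE WHEN RATE_CODE = 'BS' THEN 0
              ELSE COST / TOTAL_UNITS
         END AS DECIMAL(18,2)) AS COST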

  • CLOB / Charset / Java / Unix Issue

    Hi,
    I'm encountering the following problem.
    I'm working on a 3-tier architecture with an Oracle database (8.1.7),
    a WebLogic application server 6.1 SP4 and a web server under AIX 4.3
    (all 3 are under AIX 4.3 on the same platform).
    My application has a web interface that allows users to upload files
    to the server from their PC clients and a web browser, inserts each
    file into an Oracle CLOB (via Java code) and calls a stored
    procedure (with Java code again) to extract this CLOB to a file
    (UTL_FILE package); the extracted file is then processed line by line
    and the information inserted into other tables.
    The issue is that some characters (acute and grave accents, etc.)
    appear as question marks in the database, and that some dates from the
    file can't be processed because they are structured as DD/MM/YYYY
    (French notation, but that's normal, I'm from France).
    I first thought of an NLS_LANG problem, but on the WebLogic server it is
    set to french_france.WE8ISO8859P15, which seems right and matches
    the database configuration.
    I then tried to perform some conversion when the CLOB data was
    extracted to the file (from WE8ISO8859P15 to WE8MSWIN1252 or vice versa) with the
    Oracle CONVERT function, but it doesn't seem to work.
    Another consistent symptom is that the extracted files (from the
    CLOB columns) do not look right either (accents are not recognized).
    This is the Java code used to load the file into the CLOB (on the WebLogic
    server side):
    con = dbHandle.getAdminConnection();
    con.setAutoCommit(false);
    /// NEW IMPORT
    // int taskId = DBTools.getOraSeqValue("vtr.VTR_SEQ_LOG_IMPORT",
    DBTools.NEXTVAL,con);
    int taskId = DBTools.getOraSeqValue(SqlQueryDefinition.seqLogImport.toString(),
    DBTools.NEXTVAL,con);
    Debug.out.println("taskId " + taskId);
    // String cmd = "insert into vtr.vtr_log_import
    (cod_task,DTE_DEBUT,lob_imp,lob_rej,lob_log, txt_nom_fic_orig,
    txt_utilisateur) " +
    // "values ("+ taskId
    +",sysdate,empty_clob(),empty_clob(),empty_clob(), '"+file+"','"+
    ((UserBean)request.getSession().getAttribute("userbean")).getIdentifier()+"')";
    // stmt = con.createStatement();
    // stmt.executeQuery(cmd);
    // stmt.close();
    pstmt = con.prepareStatement(SqlQueryDefinition.initLigneImport.toString());
    pstmt.setInt(1,taskId);
    pstmt.setString(2,file);
    pstmt.setString(3,((UserBean)request.getSession().getAttribute("userbean")).getIdentifier());
    pstmt.executeQuery();
    pstmt.close();
    con.commit();
    // Writing CLOB
    // cmd = "SELECT cod_task,lob_imp,lob_rej,lob_log FROM
    vtr.vtr_log_import WHERE cod_task="+ taskId +" for update";
    // stmt = con.createStatement();
    // rset = stmt.executeQuery(cmd);
    pstmt = con.prepareStatement(SqlQueryDefinition.setBlobImport.toString());
    pstmt.setInt(1,taskId);
    rset = pstmt.executeQuery();
    rset.next();
    File csvFile = new File(localFile);
    System.out.println("csvFile length = " + csvFile.length());
    File unixFile = new File(localFile+".ux");
    Tools.dos2Unix(csvFile, unixFile);
    FileInputStream instream = new FileInputStream(unixFile);
    // support Weblogic
    clob = ClobComponent.factory(DBUtil.getInstance().isWebLogicPlatform());
    clob.setClob(rset,2);
    outstream = clob.getAsciiOutputStream();
    size = clob.getBufferSize();
    byte[] buffer = new byte[size];
    int length = -1;
    while ((length = instream.read(buffer)) != -1)
    outstream.write(buffer, 0, length);
    instream.close();
    outstream.close();
    rset.close();
    // stmt.close();
    pstmt.close();
    rset=null;
    // stmt = null;
    pstmt=null;
    con.commit();
    // IMPORT
    cs = con.prepareCall(SqlQueryDefinition.importStoredProc.toString());
    index = 1;
    cs.setString(index++, fullPath); // 1
    cs.setString(index++,
    ((UserBean)request.getSession().getAttribute("userbean")).getIdentifier());
    // 2
    cs.registerOutParameter(index++,java.sql.Types.VARCHAR); // 3
    cs.registerOutParameter(index++,java.sql.Types.VARCHAR); // 4
    cs.registerOutParameter(index++,java.sql.Types.NUMERIC); // 5
    cs.setInt(index++, taskId); // 6
    cs.executeQuery();
    String fichier1 = cs.getString(3);
    String fichier2 = cs.getString(4);
    int returnCode = cs.getInt(5);
    System.out.println("returnCode/fichier1/2 : " + returnCode + " & "
    + fichier1 + " & " + fichier2);
    cs.close();
    con.commit();
    This is the PL/SQL code used to unload the CLOB to a file (on the Oracle
    side):
    PROCEDURE writeToFile (id NUMBER, a_fichier VARCHAR2)
    IS
    result CLOB;
    cvl_tmp VARCHAR2 (32000);
    nvl_amount NUMBER := 250;
    nvl_pos NUMBER := 1;
    nvl_clob_length NUMBER;
    instr_pos NUMBER;
    file_handle UTL_FILE.file_type;
    BEGIN
    file_handle := UTL_FILE.FOPEN(
    substr(a_fichier, 1, instr(a_fichier, file_separator, -1,
    1)-1), -- dir
    substr(a_fichier, instr(a_fichier, file_separator, -1, 1)+1),
    -- file
    'W');
    select lob_imp
    INTO result
    from vtr_log_import
    where cod_task = id;
    --write clob to file
    nvl_clob_length := DBMS_LOB.getlength (result);
    cvl_tmp := NULL;
    nvl_amount := 250;
    nvl_pos := 1;
    LOOP
    instr_pos :=
    DBMS_LOB.INSTR (result, CHR (10), nvl_pos, 1) -
    nvl_pos;
    --DBMS_OUTPUT.PUT_LINE(nvl_pos||': Of length : '||instr_pos);
    IF nvl_pos + instr_pos > nvl_clob_length
    THEN
    instr_pos := nvl_clob_length - nvl_pos;
    DBMS_LOB.READ (
    lob_loc=> result,
    amount=> instr_pos,
    offset=> nvl_pos,
    buffer=> cvl_tmp);
    EXIT;
    END IF;
    DBMS_LOB.READ (
    lob_loc=> result,
    amount=> instr_pos,
    offset=> nvl_pos,
    buffer=> cvl_tmp);
    -- DBMS_OUTPUT.PUT_LINE(cvL_tmp);
    cvl_tmp := CONVERT(cvl_tmp, 'WE8MSWIN1252', 'WE8ISO8859P15');
    UTL_FILE.put_line (file_handle, cvl_tmp);
    nvl_pos := nvl_pos
    + instr_pos
    + 1;
    IF nvl_pos > nvl_clob_length
    THEN
    EXIT;
    END IF;
    END LOOP;
    UTL_FILE.fclose (file_handle);
    END writeToFile;
    I'm using the Oracle thin driver but it's not set in the classpath; maybe a
    problem with that?
    <JDBCConnectionPool DriverName="oracle.jdbc.driver.OracleDriver"
    InitialCapacity="1" MaxCapacity="100" Name="oracleUserPool"
    Password="XXXXXXX
    Properties="user=vtr_usr;dll=ocijdbc8;protocol=thin"
    Targets="myserver" TestConnectionsOnRelease="true"
    TestConnectionsOnReserve="true" TestTableName="dual"
    URL="jdbc:oracle:thin:@localhost:1521:ssr"/>
    Maybe a problem with the weblogic.codeset property (I don't set
    it)?
    Many thanks in advance. I have no idea, even if I suspect the Java
    store-to-file or the UTL_FILE extraction-to-file steps to be the cause!
    Run-O

    Run-O wrote:
    > Hi,
    > I'm encountering the following problem.
    Hi. The first thing I'd do to narrow the search is to see if my Java code
    worked in a standalone program, without weblogic in the picture. Once
    you get Oracle's JDBC driver to work with Oracle's DBMS, it shouldn't
    be hard to get the same stuff to work inside weblogic, or find out why it
    doesn't.
    Joe
