Best Practices for JDBC with RAC Oracle Thin Driver non-XA Weblogic

Does anyone have experience on setting up your application server's JDBC connection pools for RAC?
We are using Weblogic 10 which supports load-balancing between connection pools at the application level, but I believe the listener is load-balanced on the Oracle CRS side as well.
Is this recommended / not-recommended?

We use a JDBC connection pool, but the load balancing is configured at the database: JDBC uses LOAD_BALANCE=ON in the connection string. It works fine. I haven't tried integrating the FAN option with the app server.
You may refer to this doc on load balancing:
http://www.oracle.com/technology/products/database/clustering/pdf/twpracwkldmgmt.pdf
Ashok

What type of application server are you using?
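For reference, here is a minimal sketch of the LOAD_BALANCE=ON connection string described above, used from plain JDBC with the thin driver. The host names (rac1-vip, rac2-vip), port, service name, and credentials are placeholders, not values from this thread:

import java.sql.Connection;
import java.sql.DriverManager;

public class RacUrlExample {
    public static void main(String[] args) throws Exception {
        // Old-style driver registration, as used elsewhere in this thread
        Class.forName("oracle.jdbc.driver.OracleDriver");
        // Connect-time load balancing: the listener picks a node per connection
        String url = "jdbc:oracle:thin:@(DESCRIPTION="
                   + "(ADDRESS_LIST=(LOAD_BALANCE=ON)(FAILOVER=ON)"
                   + "(ADDRESS=(PROTOCOL=TCP)(HOST=rac1-vip)(PORT=1521))"
                   + "(ADDRESS=(PROTOCOL=TCP)(HOST=rac2-vip)(PORT=1521)))"
                   + "(CONNECT_DATA=(SERVICE_NAME=myservice)))";
        Connection conn = DriverManager.getConnection(url, "scott", "tiger");
        try {
            System.out.println("Connected to: " + conn.getMetaData().getURL());
        } finally {
            conn.close();
        }
    }
}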

Similar Messages

  • Best practice for dealing with Recordsets

    Hi all,
    I'm wondering what best practice is for dealing with data retrieved via JDBC as Recordsets, without involving third-party products such as Hibernate etc. I've been told NOT to use RecordSets throughout my applications, since they take up resources and are expensive. I'm wondering which collection type is best to convert RecordSets into. The apps I'm building are web-based, using JSPs as the presentation layer, beans and servlets.
    Many thanks
    Erik

    There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
    eg:
    In a user permissioning system where:
    1 user has many userRoles
    1 role has many userRoles
    1 role has many rolePermissions
    1 permission has many rolePermissions
    ie. there is a many to many relationship between users and roles, and between roles and permissions.
    The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
    During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission and permission tables, as sketched below.
    Note that if the system I just described were done with LDAP, a hierarchical database or an object database, the userRoles and rolePermissions tables wouldn't even exist; these are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
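    A hypothetical sketch of that readPermissions join; the table and column names are assumptions for illustration, not from the original post:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.HashSet;
    import java.util.Set;
    import javax.sql.DataSource;

    public class UserDAO {
        private final DataSource ds;

        public UserDAO(DataSource ds) { this.ds = ds; }

        // Joins user -> userRole -> rolePermission -> permission so the
        // business layer never sees the two link tables directly.
        public Set<String> readPermissions(long userId) throws SQLException {
            String sql =
                "SELECT DISTINCT p.name " +
                "FROM users u " +
                "JOIN user_roles ur ON ur.user_id = u.id " +
                "JOIN role_permissions rp ON rp.role_id = ur.role_id " +
                "JOIN permissions p ON p.id = rp.permission_id " +
                "WHERE u.id = ?";
            Connection conn = ds.getConnection();
            try {
                PreparedStatement ps = conn.prepareStatement(sql);
                ps.setLong(1, userId);
                ResultSet rs = ps.executeQuery();
                Set<String> permissions = new HashSet<String>();
                while (rs.next()) {
                    permissions.add(rs.getString(1));
                }
                return permissions;
            } finally {
                conn.close(); // also closes the statement and result set
            }
        }
    }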

  • Web Logic 6.1 JDBC Pooling with the Oracle Thin Driver

    Hi,
    We're slowly getting through a WebLogic 4.5.1 to 6.1 conversion for a small application
    at my company. Initially, we had trouble getting the Oracle thin driver working.
    We've finally cracked this nut. Now we're trying to get pooling working, and
    cannot find an example of doing this with the Oracle thin driver.
    Would anyone be able to provide:
    1. the portion of the config.xml that pertains to building a connection pool using
    the Oracle thin driver
    2. the two lines of code for loading the driver and getting a connection from
    this pool
    i.e.
    Driver d = (Driver)Class.forName("weblogic.jdbc.pool.Driver").newInstance();
    Connection c = d.connect("jdbc:weblogic:pool:myPool", null);
    Thanks very much!
    Jeff Ryan
    The Hartford

    "Jeff Ryan" <[email protected]> wrote:
    >
    1. the portion of the config.xml that pertains to building a connection
    pool using
    the Oracle thin driver
    You don't usually edit the config.xml unless it is absolutely necessar. Instead,
    you can config the pool in the GUI admin console, quite easily. After you successfully
    configed the pool in the console, it is written into the config.xml file for you.
    Only thing special when config the pool is the name of the driver and the url.
    url: jdbc:oracle:thin@dbhost:dbport:dbinstance
    driver: oracle.jdbc.driver.OracleDriver
    If you prefer, you can also config the pool from the command line. It is detailed
    in the "Weblogic Server Command-line Interface Reference".
    2. the two lines of code for loading the driver and getting a connection
    from
    this pool
    i.e.
    Driver d = (Driver)Class.forName("weblogic.jdbc.pool.Driver").newInstance();
    Connection c = d.connect("jdbc:weblogic:pool:myPool", null);
    Again, you don't use the connection directly. Instead, you use "Data Source".
    Data source in weblogic is just a connection object factory associated with a
    JNDI name. You can therefore lookup the data source with the JNDI name and from
    the data source object, you get the connectin object. No need to explicitly load
    the driver class.
    There are plenty of example codes available in the weblogic examples installed
    with 6.1, that uses the data source.
    Charles
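    A minimal sketch of the data-source lookup Charles describes; the JNDI name "myDataSource" is a placeholder for whatever name you configured in the console:

    import java.sql.Connection;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class PoolLookupExample {
        public Connection getConnection() throws Exception {
            // Inside the server no environment properties are needed
            Context ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("myDataSource");
            return ds.getConnection(); // a pooled connection from the pool
        }
    }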

  • Best Practices for BI, ADF and Oracle Forms installations on Weblogic

    Hi, I'm researching options for upgrading to Oracle 11g Middleware. My company currently has Oracle Forms 10g running on Oracle Application Server.
    We are interested in using Oracle Forms 11g, ADF and JDeveloper, and Business Intelligence with Oracle's WebLogic 10.3.5.
    Are there any whitepapers or documentation on best practices for installing all of these components together?
    For instance, can ADF (with JSF 2.x) be installed in the same domain as Oracle Forms 11g but use different managed servers?
    Will Business Intelligence need to be in a separate Oracle Home with its own WebLogic installation? I spent a lot of time trying to get the JSF upgraded to 2.x in the Business Intelligence installation and could not get it to work.
    I know it's a pretty broad question, but thank you for any direction on this.

    Thanks for the reply! I read through the documents and they are very good at explaining how to install the different components individually. I still can't find much on installing them together. I hope it's not just going to be a trial-and-error thing.
    So far I've done the following successfully:
    Installed WebLogic 10.3.5
    Installed Forms and Reports 11g on top of 10.3.5
    Created an additional managed server for our ADF applications.
    My next step is upgrading the JSF to 2.x. I would have to stage patches 12917525 and 12979653. I'm afraid it will break Forms and Reports, though. Any ideas?

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBE's
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling the spare parts. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all the 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock & define MRP as "Reorder Point Planning". By this, you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, planned maintenance as well as unplanned maintenance will not get delayed.
    3. By doing GI based on reservation, qty can be tracked against the order & equipment.
    As this question is MM & WM related, those forums can give better clarity on this.
    Regards,
    Maheswaran.

  • Best practices for dealing with Exceptions on storage members

    We recently encountered an issue where one of our DistributedCaches was terminating itself and restarting due to a RuntimeException being thrown from our code (see below). As usual, the issue was in our own code and we have updated it to not throw a RuntimeException under any circumstances.
    I would like to know if there are any best practices for Exception handling, other than catching Exceptions and logging them. Should we always trap Exceptions and ensure that they do not bubble back up to code that is running from the Coherence jar? Is there a way to configure Coherence so that our DistributedCaches do not terminate even when custom Filters and such throw RuntimeExceptions?
    thanks, Aidan
    Exception below:
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException

    Bob - Here is the full stacktrace:
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=205, BackupPartitions=204}
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47):
    java.lang.RuntimeException: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:84)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.partialRequest.FilterRequest.read(FilterRequest.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:169)
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:82)
         ... 25 more
    2010-02-09 13:04:23.122/90182.743 Oracle Coherence GE 3.4.2/411 <Info> (thread=Main Thread, member=47): Restarting Service: StyleCache
    Our code was doing something simple like
    catch(Exception e){
        throw new RuntimeException(e);
    }
    Would using the ensureRuntimeException call do anything for us here?
    Edited by: aidanol on Feb 12, 2010 11:41 AM
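    For what it's worth, a sketch of what an ensureRuntimeException-based catch block could look like, assuming Coherence 3.4's com.tangosol.util.Base; the surrounding method is hypothetical. Note it only improves the diagnostics: the service still terminates on an unhandled RuntimeException, and the real fix for the error above is making sure Style$PublicationState is on every storage node's classpath.

    import com.tangosol.util.Base;

    public class SafeDeserialization {
        // Hypothetical stand-in for the class loading that failed above.
        public static Class loadValueClass(String className) {
            try {
                return Class.forName(className);
            } catch (Exception e) {
                // Wraps the cause with a message while keeping its stack
                // trace, instead of burying it in a bare RuntimeException.
                throw Base.ensureRuntimeException(e, "loading " + className);
            }
        }
    }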

  • Jdbc connection using oracle thin driver( using jdk1.4 and oracle8 )

    Hello,
    While trying to connect using the Oracle thin driver and JDK 1.4, I get the error message below. I have set the classpath for the driver as well. I am using Oracle 8 Personal Edition and JDK 1.4.
    java.sql.SQLException: Io exception: The Network Adapter could not establish the connection
            at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
            at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
            at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:333)
            at oracle.jdbc.driver.OracleConnection.<init>(OracleConnection.java:404)
            at oracle.jdbc.driver.OracleDriver.getConnectionInstance(OracleDriver.java:468)
            at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:314)
            at java.sql.DriverManager.getConnection(DriverManager.java:512)
            at java.sql.DriverManager.getConnection(DriverManager.java:171)
            at Connexa.main(Connexa.java:18)
    Press any key to continue...
    My program is:
    import java.sql.*;

    public class Connexa {
        public static void main(String[] args) {
            Statement stmt = null;
            try {
                // Load the Oracle JDBC driver
                // DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
                Class.forName("oracle.jdbc.driver.OracleDriver");
                // Connect to the database.
                // You can put a database name after the @ sign in the connection URL.
                Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@127.0.0.1:1521:orcl", "scott", "tiger");
                // Connection conn =
                //     DriverManager.getConnection("jdbc:odbc:datasource", "system", "manager");
                stmt = conn.createStatement();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    The code itself is fine; the problem is with one of:
    1) the connection URL
    2) intermediate networking
    3) the database itself
    1) Your connection URL is "jdbc:oracle:thin:@127.0.0.1:1521:orcl"
    - is Oracle really running on the default port, 1521?
    - is the installation SID really "orcl"?
    2) Lots of possibilities, but only a couple are likely:
    - is TCP/IP configured and running on your host?
    - is there a personal firewall product running? Perhaps it's blocking the connection.
    3) Is Oracle running?
    Is the listener running?
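    One quick way to separate cases 2 and 3 from case 1, independent of JDBC: open a raw socket to the host and port from your URL. If this fails, the problem is the network or the listener, not the driver or the URL syntax (a sketch, using the values from your connection string):

    import java.net.Socket;

    public class ListenerProbe {
        public static void main(String[] args) throws Exception {
            // Throws an IOException if nothing is listening on the port
            Socket s = new Socket("127.0.0.1", 1521);
            System.out.println("Listener port is reachable");
            s.close();
        }
    }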

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db came back as instances of the model classes and was then put into Struts form beans, etc.
    I changed jobs last month and am now having to use Servlets with JDBC to retrieve records from db tables and returning it into Recordsets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
    b) I created a class with a bunch of String attributes to copy the Recordset data into, one Recordset row per bean instance, and then close the Recordset.
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the RecordSet? Would you have had a bean class with attributes other than Strings - like a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    Would you use one SQL statement to retrieve all of the data for display?
    Yes.
    ...and use logic to avoid printing the redundant part of the data?
    No.
    I believe in minimising the number of queries. If it is a simple one-many join on a db table, then one query is better than one + n queries.
    However, I prefer to store the objects in a bean class with attributes other than Strings - ie one object, with a collection attribute to hold the related "many" records (see the sketch below).
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? JUnit testing?
    The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree, in terms of best practices what you have described so far sounds like a step backwards from what you were previously doing.
    However, I wouldn't go complaining about it too loudly, too quickly. If you're new on the block there's nothing like making a pain of yourself, and complaining how backwards the work they have done is, to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets
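    A sketch of the "one object, with a collection attribute" approach described above; the table and column names are made up for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class OrderDAO {
        // Parent bean holding the "many" rows in a collection attribute.
        public static class Order {
            long id;
            String customer;
            List<String> lineItems = new ArrayList<String>();
        }

        public List<Order> findOrders(Connection conn) throws Exception {
            String sql =
                "SELECT o.id, o.customer, l.description " +
                "FROM orders o LEFT JOIN order_lines l ON l.order_id = o.id " +
                "ORDER BY o.id";
            PreparedStatement ps = conn.prepareStatement(sql);
            ResultSet rs = ps.executeQuery();
            // One Order per parent key; the redundant parent columns from
            // the join are folded away while reading, so the JSP needs no
            // "skip duplicates" logic. LinkedHashMap keeps the row order.
            Map<Long, Order> byId = new LinkedHashMap<Long, Order>();
            while (rs.next()) {
                Long id = Long.valueOf(rs.getLong(1));
                Order o = byId.get(id);
                if (o == null) {
                    o = new Order();
                    o.id = id.longValue();
                    o.customer = rs.getString(2);
                    byId.put(id, o);
                }
                String line = rs.getString(3);
                if (line != null) {
                    o.lineItems.add(line); // null when the order has no lines
                }
            }
            rs.close();
            ps.close();
            return new ArrayList<Order>(byId.values());
        }
    }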

  • Best practice for making changes to Oracle apps business views and BAs/folders

    HI
    The Oracle BI solution comes with pre-defined Business Views (database views) and Business Areas and folders. If we want to customize those database views or BAs and folders, what is the best practice in order to avoid losing the changes during any upgrades?
    For example, the out-of-the-box Order Management BA that we are using heavily needs some additional fields added to the Order Header and Order Lines folders, and we also want to add some custom folders to this BA.
    If we make the changes to the database views behind this BA, would they be lost during an upgrade, or do we have to copy (duplicate) those views, update them, and create a custom BA and folders against those views?
    Thanks

    Hi,
    If you are adding new folders then just add them to the Oracle Business Area. The business area is just a collection of folders; if the business area were changed in an upgrade, the new folder would not be deleted.
    If you want to add fields to the existing folders/views then you have 2 options. Add the field to the defining base view (these are the views beginning OEBV and OEFV) and then regenerate the business views. This may be overwritten if the view is upgraded, but this is unlikely.
    Alternatively, copy the view to create a new version and then map the old folder to the new view and refresh. You may need to re-map the folder if the folder is upgraded, but at least you have a single folder used by both Oracle and custom reports.
    Rod West

  • Best practice for developing with CRM 2013 (On Premises)

    Hello all. I'm just starting to work with CRM, and I have some questions that hopefully will be simple for the seasoned developers. It's mostly just some best practice or general how-to questions for the group.
    - When creating a new Visual Studio CRM Project I can connect to my CRM instance and create new WebResources which deploy to the CRM instance just fine, but how can I pull all the existing items that are in the CRM Solution into the Visual Studio CRM project? Or do I need to export the solution to a ZIP, expand it with SolutionPackager.exe, then copy these into my Visual Studio project to get it into sync?
    - When multiple developers are working on changes, is it best to keep everything in a Visual Studio project as I mentioned above, or is it better for everyone to have their own instance of CRM to code with so they can Export/Import solutions as needed, and then have these solutions manually merged before moving into a common Test/QA environment?
    - When modifying the submenu on a CRM form, is it suggested to use Ribbon Workbench, or is it better/easier to just export the solution, expand it with SolutionPackager.exe, modify ribbondiff and anything else required for the change, package it back up, then reimport to CRM? I've heard from some that Ribbon Workbench has some limitations, but being green I wasn't sure what those limitations might be or if it'd be best to just manually make these changes. Or is there any way to have a copy of ribbondiff in Visual Studio and deploy this without having to repackage the Solution and import the ZIP?
    I think that's it for now :) Thanks for any advice or suggestions. I really want to start learning the ins and outs of CRM and how all the pieces fit together. Also, can someone direct me to some documentation or books that might give more insight on developing for CRM 2013 or 2015 (moving to this soon)?
    Thanks for your time.

    Hi Sam
    Also interested in best practice around this area - especially recommended development routes, unit testing, continuous integration etc. - it would be great if you posted here if you find any good articles. At the moment we tend to just push changes onto a live system as and when appropriate, and I'd prefer to move away from that...
    Thanks
    Stuart

  • What is best practice for integration with freight forwarders?

    Hello,
    We are looking into the possibilities for automatically exchanging data with one of our freight forwarders. We will send them our shipment information and they will send back shipment status and date information, including some additional information like the house bill. Sending the shipment data from our SAP (ECC 6) system is no issue; we have done that before. What is new to us is receiving back the status updates from the forwarder. Is there a kind of best practice for where to store this information on the shipment (or in a separate table) and which standard function module or BAdI to use for this?
    We are using ECC 6.0 Sales and Distribution, but no Transportation Management or SCM modules.
    We would like to hear the experiences of people who have done this type of integration with their forwarders.
    Regards,
    Ed

    SAP has added SAP TM 8.10 as a separate package, integrated with R/3, which means a separate server is required if SAP TM is to be implemented; it will take care of your expectations. For more information on this, search in Google and you will find a couple of documents on this topic.
    G. Lakshmipathi

  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect this scenario. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking of having 3 different DataBindings.cpx files and changing the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1). Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there any easy way to keep multiple DataBindings.cpx files in synch, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to include just the dataControlUsages section into a common DataBindings.cpx file?
    2). How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that it would make sense to accomplish with ANT? I'm not an ANT expert at all. The ANT script would have "build-test-ear" and "build-prod-ear" targets which would swap in a correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3). Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long long delay in responding I've been traveling - and thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file/etc). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in Production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active." However, our Subversion repository is going to have a few complaints about this.
    John
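    A sketch of the "specify the configuration directly in the code" option mentioned above, using ADF BC's oracle.jbo.client.Configuration; the module and configuration names are placeholders. Reading the configuration name from a system property is one way to keep the Test/Prod switch out of the compiled code:

    import oracle.jbo.ApplicationModule;
    import oracle.jbo.client.Configuration;

    public class AmConfigExample {
        public static void main(String[] args) {
            // e.g. -Dam.config=AppModuleProd selects the Prod configuration
            String config = System.getProperty("am.config", "AppModuleTest");
            ApplicationModule am =
                Configuration.createRootApplicationModule("model.AppModule", config);
            try {
                System.out.println("Using AM configuration: " + config);
                // ... exercise view objects here ...
            } finally {
                Configuration.releaseRootApplicationModule(am, true);
            }
        }
    }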

  • Best practices for working with large placed bitmap images?

    Hey all,
    I need some advice on the best way to approach building these files. I've been working on some banners that are very large: 3 x 7 feet.
    Each banner has a simple vector graphic treatment at the top and bottom (a rectangle with a different colored rule on top, and a vector logo) and a small amount of text, just a URL and a headline. The headline is type (not converted to outlines) and usually has some other effect applied to it, say a drop shadow or outer glow. Under these graphics is a full-bleed image. The placed images need to be 150 ppi at actual size, so they're honking big, sometimes up to 2GB. Once the layouts are approved, they have to go to a vendor for output.
    The Illustrator docs are really large, and I've read in other threads how to combat that (PDF compatibility, raster settings). But even still, does anyone have any insight into the best way to deal with these things? The dimensions are large, and then the images are large, and it just makes for lots of looking at the spinning ball of death...
    If it were me, I'd build them in InDe, but the vector graphics need to be edited for each one, and I so don't like to do that in InDe unless forced. To me, it's still ultimately a page layout app, not a drawing app. (Old school here.)
    FYI, our machines are all MBPs with 8G ram and the latest Intel Core 2 Duo chips, 2.66 and 2.8GHz. If we keep the files local (as opposed to working on the server) it should be fairly zippy... No?
    Any advice is appreciated, thanks!

    You can get into memory trouble with very large placed pdf files. Tiffs too.
    This has to do with the preview, which contains much more information than you need for working.
    On the other hand if you place EPSs and take care not to turn on overprint preview you can get away with huge files.
    If you do turn on overprint preview your machine will slow down a lot and the file may become totally unmanageable.
    Compare this to InDesign, where you can control the quality of the preview. A hi-res preview will slow you down, and most often you don't need it anyway.
    I was working (in Illie) the other day on much larger files than you mention – displays for whole walls – and had some considerable trouble until I reverted to the old EPS format. They say it's dying, but it ain't dead yet.

  • Best practices for start with itunesu

    Hi people, I'm from Barcelona, Spain. We are starting to deploy iTunes U for our university. We have a public and a private site; the public site works with Public Site Manager.
    I'm looking for the best way to handle videos and feeds, and have tried Podcast Producer 2, Feeder and Podcast Maker.
    Podcast Maker is a cool option, but its XML does not have the elements for iTunes U.
    Feeder is the best option; it makes a perfect XML for iTunes U.
    With Podcast Producer 2, I don't understand the difference between RSS feeds and Atom feeds. The workflow produces an iPod version and an Apple TV version; in Public Site Manager, when I add the RSS feed I can see an iPod version, an Apple TV version and an audio version, but with the Atom feed only an iPod version. Why?
    I created a workflow with the names of the courses.
    Podcast Producer does not have the elements for iTunes U (category, order, etc.).
    The author name is the same as the username in Podcast Producer.
    Do you need to edit the XML file (UUIDNumber_offeed.cache) to add the iTunes U elements?
    Is there any poll of software for publishing to iTunes U?
    Sorry for my English.
    Thanks a lot


  • Best Practices for VidConf with external parties

    We currently use Sony Video conference units for our end units. On the backend, we use a Cisco 3515 for multipoint conferences.
    We now need to be able to do video conferences with external parties using our MCU.
    Since there are many ways of doing this, are there solutions that work better than others? Basically I need to publish a doc that says 'here is what you will need: this port open on your firewall, a public IP address that is not NATed....'
    Any help would be appreciated.

    David,
    Unfortunately, Cisco has not (currently) chosen to implement H.460.x in their H.323 infrastructure solutions, so a Cisco GK could not be used in combination with a Tandberg SBC (session border controller). However, we currently use Cisco gatekeepers for everything but firewall traversal. For traversal, the Tandberg GK functions as the inside portion of the traversal solution and proxies for all my codecs (a mix of six different Tandberg and Polycom product lines)--even those that are H.460-capable. Combined with the SBC, a Cisco MCM (25xx router) sits on the public side of the firewall and is neighbored to the session border controller.
    The "public" GK is for the entities who write custom policies in their firewalls or set their codecs on the public side of their security. If not H.460 compliant, they have to register with a GK; they can't register directly with the SBC, hence the public GK, which is simply neighbored to the SBC.
    I would encourage you to closely look at the Polycom V2IU as well. It has come a LONG way since it was introduced a couple of years ago.
    Personally, I still don't feel like the V2IU has as much flexibility nor does it implement a dialing methodology best suited for converging networks. It is an ALG (App Layer GW) that has been tweaked to support H.323 traversal, so I don't think it will ever truly match the Tandberg Expressway solution apples-apples, but it is dramatically less expensive and thus worth considering.
    We tested both the Tandberg and Polycom traversal solutions with our internal CAC (call admission and control infrastructure), which is made up of multiple Cisco GK products in a fully meshed neighboring scenario prior to purchase of a traversal solution, and the Cisco products interoperated with both traversal solutions.
    Cisco did present a proposed Layer 3 solution, but we felt it best to pursue something based within the H.323 umbrella standard.
    If you want to talk more, please email me at [email protected] w/ direct contact info and I'll be happy to assist you in any way I can.
    Greg
