Nested regions: best practice regarding inheritance of editable regions.

I have a template page, http://www.satgraphics.com/templates/main_01.dwt, in which I have an area labeled "div#main_wrapper".
When attempting to make it an editable region, I received the following message: "The selection already contains or overlaps with an editable, repeating, or optional region".
I had already made the "header" and "column_left" regions editable, but as "column_left" is nested within the wrapper, my question is: would the correct process be to undo the "column_left" editable region, make "div#main_wrapper" editable, and then edit the "column_left" area as an inherited part of the larger editable region?
When I did try this on the "main_wrapper" div, I did not get the identifier label (the blue text block at the top left of each region) and received the warning mentioned previously.
Thank You...

I did attempt to make the "main_wrapper" div editable, but did so after already having made the "column_left" div region (nested in the wrapper) editable, and that is when I received the error message.
As an aside, the site is fairly simple, so I am using a single template in which I make content changes to the header, column_left, and (hopefully) the wrapper itself for each new page, thus intentionally wanting an editable region in the content of the "wrapper".
The "footer" and "menu" are the two areas I did not open up for change; the "menu" is a library item, so any changes are made from there.
So the question now becomes: how can the "main_wrapper" div be made editable without triggering the aforementioned message?
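To make the markup concrete, here is roughly what I believe the .dwt needs to look like once "column_left" reverts to plain markup (region names taken from my divs; Dreamweaver editable regions are HTML comment pairs, and one may not contain another):

<!-- TemplateBeginEditable name="header" -->
<div id="header"> ... </div>
<!-- TemplateEndEditable -->
<!-- TemplateBeginEditable name="main_wrapper" -->
<div id="main_wrapper">
  <div id="column_left"> ... </div> <!-- now editable only as part of the wrapper -->
</div>
<!-- TemplateEndEditable -->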
http://www.satgraphics.com/templates/main_01.dwt
Thank You for your help.

Similar Messages

  • Best Practices regarding AIA and CDP extensions

    Based on the guide "AD CS Step by Step Guide: Two Tier PKI Hierarchy Deployment", I'll have both internal and external users (with a CDP in the DMZ), so I have a few questions regarding the configuration of AIA/CDP.
    From here: http://technet.microsoft.com/en-us/library/cc780454(v=ws.10).aspx
    "A root CA certificate should have an empty CRL distribution point because the CRL distribution point is defined by the certificate issuer. Since the root's certificate issuer is the root CA, there is no value in including a CRL distribution point for the root CA. In addition, some applications may detect an invalid certificate chain if the root certificate has a CRL distribution point extension set."
    To have an empty CDP, do I have to add these lines to the CAPolicy.inf of the offline Root CA:
    [CRLDistributionPoint]
    Empty = true
    What about the AIA? Should it be empty for the root CA?
    Using only HTTP CDPs seems to be the best practice, but what about the AIA? Should I only use HTTP?
    Since I'll be using only HTTP CDPs, should I use LDAP Publishing? What is the benefit of using it and what is the best practice regarding this?
    If I don't want to use LDAP publishing, should I omit these commands:
    certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA
    certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Thank you,

    Is there any reason why you specified a '2' for the HTTP CDP ("2:http://pki.fabrikam.com/CertEnroll/%1_%3%4.crt")? This will be my only CDP/AIA extension, so isn't it supposed to be '1' in priority?
    I tested the setup of the offline Root CA, but after the installation the AIA/CDP extensions were already pre-populated with the default URLs, so I removed all of them.
    The Root Certificate and CRL were already created after the AD CS installation in C:\Windows\System32\CertSrv\CertEnroll\ with the default naming convention, including the server name (%1_%3%4.crt).
    I guess I could rename them without impact? If someday I have to revoke the Root CA certificate, or the certificate has expired, how will I update the Root CRL since I have no CDP?
    Based on this guide: http://social.technet.microsoft.com/wiki/contents/articles/15037.ad-cs-step-by-step-guide-two-tier-pki-hierarchy-deployment.aspx,
    the Root certificate and CRL are published in Active Directory:
    certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA
    certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Is it really necessary to publish the Root CRL in my case?
    Instead of using dspublish, isn't it better to deploy the certificates (Root/Intermediate) through a GPO, like the Default Domain Policy?
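    For reference, the minimal CAPolicy.inf I have in mind for the offline root, assuming the Empty directive applies to the AIA section the same way the TechNet article describes it for the CDP:
    [Version]
    Signature="$Windows NT$"
    [CRLDistributionPoint]
    Empty=True
    [AuthorityInformationAccess]
    Empty=True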

  • Best Practices regarding program RCOCB004

    Dear Colleagues
    I'd like to discuss the Best Practices regarding the setup of jobs to send Process Messages
    In my company we have a batch job with two steps. Each step contains one variant of program RCOCB004.
    The first step will send messages with Status "To be sent", "To be resubmitted" and "To be resubm. w/warng"
    The second step will send messages with Status "Destination Error", "Terminated", "Incomplete"
    However, this job sometimes fails with error "Preceding job not yet completed (plant US07)"
    I'd like to discuss the best way to set up this job in order to avoid this error and also improve performance.
    Thanks and Regards

    Dear,
    To keep the number of message logs in the system low, proceed as follows:
          1. Check the report variants for report RCOCB004 used in your send jobs. The sending of messages in status "Destination error" or "Terminated" is only useful if the error is corrected without manual intervention; for example, with messages of category PI_PHST, if the sequence of the messages or time events was swapped in the first send process.
          2. Regularly delete the logs of the messages that were not sent to the destination PI01 using report RCOCB009 (Transaction CO62).
          3. Check whether it is actually required to send messages to the destination PI01. This is only useful if you want to evaluate the data of these messages by means of the process data evaluation, or if the message data including the logs are to be part of the process data documentation or the batch log. Remove destination PI01 for the message categories to which the above-mentioned criteria do not apply. You can activate destination PI01 again at a later stage.
          4. If you still want to send process messages to destination PI01, carry out regular archiving of your process orders. As a result of the archiving, the message copies and logs in the process message record are also deleted.
          5. If the described measures do not suffice, you can delete the logs using Transaction SLG2.
    Control recipe send = RCOCB006, and you need to set the job to run after the event SAP_NEW_CONTROL_RECIPES.
    Process message send = RCOCB002 (cross-plant) and RCOCB004 (plant-specific). You need to create variants for these.
    Check the IMG documentation in PPPI for control recipes and process instructions where there is more information about this. Also standard SAP help is quite good on these points.
    Finally, if you are automatically generating process instructions then you need program RCOCRPVG plus appropriate variants.
    Hope it will help you.
        Regards,
    R.Brahmankar

  • Best Practice Regarding Large Mobility Groups

    I was reading the WLC Best Practices and was wondering if anyone could put a number to this statement regarding the largest number of APs, end users, and controllers which can be contained in a Mobility Group.
    We would be deploying WiSMs in two geographically dispersed data centers. No voice is being used or is planned.
    "Do not create unnecessarily large mobility groups. A mobility group should only have all controllers that have access points in the area where a client can physically roam, for example all controllers with access points in a building. If you have a scenario where several buildings are separated, they should be broken into several mobility groups. This saves memory and CPU, as controllers do not need to keep large lists of valid clients, rogues and access points inside the group, which would not interact anyway.
    Keep in mind that WLC redundancy is achieved through the mobility groups. So it might be necessary in some situations to increase the mobility group size, including additional controllers for
    redundancy (N+1 topology for example)."
    I would be interested in hearing about scenarios where a Catalyst 6509 with 5 WiSM blades is deployed in data centers which back each other up for cases of disaster recovery.
    Can I have one large Mobility Group? This would be easier to manage.
    Or would it be better to back up each blade with a blade in the second data center? This would call for smaller Mobility Groups.
    Be glad to elaborate further if anyone has a similar experience and needs more information.
    All responses will be rated.
    Thanks in advance.
    Paul
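    For reference, group membership is defined per controller with a couple of CLI lines (the domain name, MAC and IP below are made up), so regrouping later should not be a huge configuration change:
    config mobility group domain DC-EAST
    config mobility group member add 00:0b:85:xx:xx:xx 10.10.10.5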

    Well, that is a large group indeed, and I would say most organizations use nested groups instead of adding these behemoths to the directory, as they are quite difficult to work with. If it's a one-time thing, you could create it manually in bite-sized chunks with LDIF or the like, so that FIM only has to do small delta changes afterwards.
    The 5,000 member limit mostly applies to groups prior to the change to linked value storage.  What is your forest functional level, and have you verified that this group is using linked values?
    Steve Kradel, Zetetic LLC

  • Looking for some best practice regarding Content Administrator access

    Hi. I am looking for some best practice or rule of thumb from SAP or from different companies on how they address Portal Content Administrator access in a Production environment. Basically, our company is implementing a portal to work with SAP BW. We are on SP 9. I am trying to determine whether we should have 1-2 Portal Content Administrators in Production with 24/7 access, or whether we should restrict that access. Can you share some ideas of what is right and what is not?
    Should we have this access in Production? Or should we have it but limited? By the way, our users are allowed to publish BI reports/queries into Production.

    Hello Michael,
    Refer to this guide about managing initial content in portal.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/00bfbf7c-7aa1-2910-6b9e-94f4b1d320e1
    Regards
    Deb
    [Reward Points for helpful answers]

  • Best Practice regarding using and implementing the pref.txt file

    Hi All,
    I would like to start a post regarding what is Best Practice in using and implementing the pref.txt file. We have reached the stage where we are about to go live with Discoverer Viewer, and I am interested to know what others have encountered or done with their pref.txt file and the Viewer look and feel.
    If any of you have been able to add additional lines into the file, please share ;-)
    Look forward to your replies.
    Lance

    Hi Lance
    Wow, what a question, and the simple answer is: it depends. It depends on whether you want to use the query predictor, whether you want to increase the timeouts for users and lists of values, whether you want to have the Plus available items and Selected items panes displayed by default, and so on.
    Typically, most organizations go with the defaults with the exception that you might want to consider turning off the query predictor. That predictor is usually a pain in the neck and most companies turn it off, thus increasing query performance.
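    For example, the query predictor comes down to a single line in pref.txt (the key below is as I recall it from the 10g file, so verify it against your own copy, and remember to apply the preferences afterwards for the change to take effect):
    QPPEnable = 0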
    Do you have a copy of my Discoverer 10g Handbook? If so, take a look at pages 785 to 799 where I discuss in detail all of the preferences and their impact.
    I hope this helps
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com

  • Best Practices for Professional video editing

    Hi
    I'd like to know your thoughts on the most professional / efficient method for editing. At the moment, I archive all the footage from a DV tape through iMovie (I just find iMovie easier for doing this), save/archive all the imported segments of clips I need, name them, then import them into FCP.
    When I finish an edit, I export an uncompressed QuickTime movie, then back up the entire project on an external drive.
    Is this good practice? Should I export the final edit to tape?
    I've just started out as a video-maker as a paid profession and I'd like to know the most 'by the book' methods.
    Thanks
    G5 Dual   Mac OS X (10.4.8)  

    Sounds to me like you're doing a whole lot of extra steps using iMovie for your import. You're going to lose some of FCP's best media features by not digitizing with FCP. Batch Capture in FCP isn't hard to learn.
    I wouldn't say there's any "rulebook" for professional editors. We all work a little differently, but here are some of my "best practices":
    Always clearly name and label all of the tapes that you are using in a fashion that makes sense to you. When I cut a large project I may have multiple tapes. If I lose a piece of media accidentally, it's easier to go back and re-digitize if I have organized the project early on.
    Clearly label bins and use them wisely. For example, on a small project I might have a "video" bin, a "music" bin and a "graphics" bin. This saves searching through one large bin.
    On larger projects, I try to think ahead to how I will edit and make bins accordingly. For example I might have bins as follows, interviews, b-roll location a, b-roll location b and so on. Then I'll have music bins, animation bins and still graphic bins. I generally try to save all to one hard drive which saves me looking through three or four drives. This isn't always possible depending upon the size of the project.
    As for back-up: lots of people buy hard drives for each project and then store them until they need them next. Of course, keep all of your raw footage and you can always re-digitize.
    When I'm done with a project I save the completed project to tape... this is for dubs and the library. I save the FCP project information on a DVD and clear the media from the drive, because I can't afford multiple hard drives. I would rather re-digitize my raw footage if I need to re-do the project in the future.
    That's how I do it, but other editors have other methods. I would highly suggest digitizing in FCP and not i-movie, but that's entirely up to you. You're not doing anything "wrong."
    G4 Dual Processor   Mac OS X (10.4.1)  

  • Best practice regarding package-private or public classes

    Hello,
    If I was, for example, developing a library that client code would use and rely on, then I can see how I would design the library as a "module" contained in its own package,
    and I would certainly want to think carefully about what classes to expose to outside packages (using "public" as the class access modifier), as such classes would represent the
    exposed API. Any classes that are not part of the API would be made package-private (no access modifier). The package in which my library resides would thereby create an
    additional layer of encapsulation.
    However, thus far I've only developed small applications that reside in their own packages. There does not exist any "client code" in other packages that relies on the code I've
    written. In such a case, what is the best practice when I choose to make my classes public or package-private? Is it relevant?
    Thanks in advance!

    Jujubi wrote:
    ...However, thus far I've only developed small applications that reside in their own packages. There does not exist any "client code" in other packages that relies on the code I've written. In such a case, what is the best practice when I choose to make my classes public or package-private? Is it relevant?
    I've always gone by this rule of thumb: Do I want others using my methods, or is it appropriate for others to use them? Are my methods "pure" and free of package-specific coding? Can I guarantee that everything will be initialized correctly if the package is included in other projects?
    Basically: if I can be sure that the code will do what it is supposed to do and I've not "corrupted" the obvious meaning of the method, then I usually make it public; otherwise, the outside world (other packages) does not need to see it.
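    A minimal illustration of the distinction (class and method names made up for the example):
    // TextCodec.java -- the public class is the exposed API
    public class TextCodec {
        public String encode(String plain) {
            return Rot13.apply(plain);       // delegates to a package-private helper
        }
    }
    // Package-private: an implementation detail, invisible outside this package
    class Rot13 {
        static String apply(String s) {
            StringBuilder sb = new StringBuilder(s.length());
            for (char c : s.toCharArray()) {
                if (c >= 'a' && c <= 'z')      sb.append((char) ('a' + (c - 'a' + 13) % 26));
                else if (c >= 'A' && c <= 'Z') sb.append((char) ('A' + (c - 'A' + 13) % 26));
                else                           sb.append(c);
            }
            return sb.toString();
        }
    }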

  • Best Practice Regarding Maintaining Business Views/List of Values

    Hello all,
    I'm still in the learning process of using BOXI to run our Crystal Reports. I was never familiar with the BO environment before, but I have recently learned that for every dynamic parameter we create for a report, the Business View/Data Connectors/LOV are created in the Enterprise Repository the moment the Crystal Report is uploaded.
    All of our reports are authored from a SQL Command statement, and various reports will often use the same field name from the database. For example, we have several reports that use the field name "LOCATION", which exists in a good number of tables in the database.
    When looking at the Repository, I've noticed there are several variations of LOCATION, each of which I'm assuming belongs to one specific report. Having said that, I can see it becoming a nightmare to figure out which variation of LOCATION belongs to which report. Sooner or later the Repository will need to be maintained a bit more cleanly, and at the rate we author reports, I foresee a huge amount of headache down the road.
    With that being said, what's the best practice, in a nutshell, for maintaining these repository items? Is it done indirectly on the Crystal Report authoring side, where you name your parameter field so it is identifiable to a specific report? Or is it done directly on the Repository side?
    Thank you.

    Eric, you'll get a faster qualified response if you post to the  Business Objects Enterprise Administration forum as that forum is monitored by qualified support for BOE

  • Best Practice regarding a Re-classification

    Hi
    We are on 4.7. An amount had been transferred from Trade Debtors to Trade Debtors Retention (Special G/L) using FB01 (Cr. Customer A & Dr. Customer A with Sp. G/L). This caused us an issue during month-end by creating a DUMMY profit centre entry. This, I understand, is due to the fact that when transferring between subledgers without touching a balance sheet account, the document does not carry a profit centre entry, therefore resulting in the DUMMY entry.
    I would like to know the best practice for such a reclassification so that I can ask the users to avoid such postings in future.
    Thanks
    Nadini
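    One approach that is sometimes suggested, purely as an illustration (amounts invented; it assumes a dedicated balance sheet clearing account, say 199999, so that every document touches a G/L account that can derive a profit centre):
    Document 1:  Dr  Customer A (Sp. G/L)   10,000
                 Cr  Clearing a/c 199999    10,000
    Document 2:  Dr  Clearing a/c 199999    10,000
                 Cr  Customer A             10,000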


  • Including page in a region : best practice

    Hello all,
    I know that this subject has already been discussed many times, but my question is not really "how to do it"... it is "is it the best way"?
    The initial need was the following:
    I had to be able to include the content of a page from one application in a region of another application, using a generic solution that allows the end user to easily configure which page to render without having to use Apex.
    For this :
    1. I first defined a plugin that renders a parameterized iframe in a region (see the sketch after this list).
    2. I defined my plugin region on the first page and linked it with elements that define the parameters (page number to render, items and values to set on the page).
    3. The user can then easily change the rendered page in the first application by changing the database row that specifies the page id.
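    For illustration, the render logic boils down to something like this stripped-down PL/SQL (the item names are hypothetical; apex_util.prepare_url just adds the session state checksum to the target URL):
    declare
      l_url varchar2(4000);
    begin
      -- target application and page come from the configuration row mentioned above
      l_url := apex_util.prepare_url(
                 p_url => 'f?p=' || :P1_TARGET_APP || ':' || :P1_TARGET_PAGE
                          || ':' || :APP_SESSION);
      sys.htp.p('<iframe src="' || l_url
                || '" width="100%" height="600" frameborder="0"></iframe>');
    end;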
    That is working pretty well, but I would like to be sure there is no more proper solution, because some customers do not really like the idea of having an application with embedded iframes.
    I considered a solution using a PL/SQL dynamic region, trying to retrieve the page content with an HTML POST in PL/SQL, but it was not really a success. After fixing access issues, I noticed that the page rendered but nothing on it was functional (buttons, processes...) because of the unknown session context.
    I did not succeed in fixing this.
    I also considered using AJAX with htmldb_Get, but I do not really like that in this case, because I want the region to be displayed on page load; it does not seem logical to make a server call to first render the original page and then refresh the region content.
    Moreover, I have already used this in the past, and I also had issues with session context (processes not working and so on...).
    Do you have any other ideas? Do you think the iframe solution is the best one?
    Thanks in advance for your help and advice.
    Best Regards,
    Max

    Hello,
    "...because some customers do not really like the idea of having an application with embedded iframes." How do the customers know whether you are using iframes or not? If you apply page templates similar to "pop-up" and use the same CSS definitions as the calling application for the embedded application, then they can't tell whether it is part of the application or an iframe calling another application.
    Regards,
    Hari

  • Best practice regarding work flow (cutting, applying effects, exporting etc)

    Hi!
    I've been asked to shoot and edit a music video for a friend of mine, and I'm trying to figure out the best way to manage this project in PrE (in what order to do things and so on). I have a picture in my head which makes sense, but I'd like to have it confirmed. If you've been following the "Very disappointed with Premiere Elements" thread, you know I'm not a fan of how applying effects works when there are a lot of cuts between scenes and clips. A few of the steps below are meant to make that process more effective.
    So, here's my idea, from the begining and in detail:
    1. Download the appropriate clips from the camera (in this case 1280x720 H.264 MOVs from an EOS 500D).
    2. Create a PrE project for each clip and maybe trim the ins and outs a bit, if needed.
    3. Export each clip to an uncompressed AVI.
    4. Create the main project file and import all the uncompressed AVIs.
    5. Insert the clips on the appropriate tracks in the timeline.
    6. Do all the cutting, trimming and syncing as completely as possible, without thinking about effects.
    7. When finished, open up each of the smaller clip projects and add the desired effects. This will mainly include contrast, color correction, noise etc., in order to get the right look and feel for each clip/scene.
    8. Again, export the clips to uncompressed AVIs and overwrite the previous versions.
    9. Open up the main project, which now should contain the clips with look-and-feel effects visible.
    10. Add some additional effects if needed.
    11. Export/share, and you're done.
    Of course I will end up going back and forth through these steps anyway, but as a basic plan it seems reasonable. I see three main positive aspects:
    1. The look-and-feel effects will be applied to the raw material, before the conversion process. This should result in slightly better quality. Perhaps not noticeably, but anyway.
    2. The main project will be more CPU friendly and easier to work with.
    3. If I want to tweak the look-and-feel effects of a clip/scene, I don't have to do it on every split (I will have a lot of splits, so applying and changing effect parameters would be time-consuming and inefficient). Of course, opening up the clip's specific project, changing the effect and then exporting to AVI also takes time, but points 1 and 2 make up for that.
    Bear in mind that it is a music video project, to put things in the right context. We'll probably have a few parallel stories/scenes, with lots of cutting in and out between them. The timeline will probably look insane.
    So, am I thinking in the right direction here? Any traps I might fall into along the way?
    Regards
    Fredrik

    Fredrik,
    Though similar to your workflow, here is how I would do it.
    Import those "raw" Clips into a Project, and do my Trimming in that Project, relying on the Source Monitor to establish the In & Out Points for each, and also using different Instances of any longer "master Clip". I would also do my CC (Color Correction) and all density (Levels, etc.) Effects here. Do not Trim too closely, as you will want to make sure that you have adequate Handles to work with later on.
    Use the WAB (Work Area Bar) to Export the material that I needed in "chunks," using either Lagarith Lossless CODEC, or UT Lossless CODEC *
    Import my music into a new Project and listen over and over, making notes on what visuals (those Exported/Shared Clips from above) I have at my disposal. At this point, I would also be making notes as to some of the Effects that I felt went with the music, based on my knowledge of the available visuals.
    Import my Exported/Shared, color graded Clips.
    Assemble those Clips, and Trim even more.
    Watch and listen carefully, going back to my notes.
    Apply any additional Effects now.
    Watch and listen carefully.
    Tighten any edits, adjust any applied Effects, and perhaps add (or remove existing) more Effects.
    Watch and listen carefully.
    Output an "approval" AV for the band/client.
    Tweak, as is necessary.
    Output "final approval" AV.
    Tweak, as is necessary.
    Export/Share, to desired delivery formats.
    Invoice the client.
    Cash check.
    Declare "wine-thirty."
    This is very similar to your proposed workflow.
    Good luck,
    Hunt
    * I have used the Lagarith Lossless CODEC with my PrE 4.0, but have not tried UT. Both work fine in PrPro, so I assume that UT Lossless will work in PrE too. These CODECs are fairly quick in processing/Exporting, and offer the benefit of smaller files than Uncompressed AVI. They are visually lossless. The resultant files will NOT be tiny, so one would still need a good amount of HDD space. Neither CODEC introduces any artifacts or color degradation.

  • Any "Best Practice" regarding use of zfs in LDOM with zones

    I have 3 different networks and I want to create a guest-domain for each of the three networks on the same control domain.
    Inside each guest-domain, I want to create 3 zones.
    To make it easy to handle growth and also make the zones more portable, I want to create a zpool inside each guest domain and then a ZFS filesystem for each zone root.
    By doing this I will be able to handle growth by adding vdisks to the zpool (in the guest domain) and also to migrate individual zones by using zfs send/receive.
    In the "LDoms Community Cookbook", I found a description on how to use zfs clone in the control domain to decrease deploy time of new guest domains:
    " You can use ZFS to very efficiently, easily and quickly, take a copy of a previously prepared "golden" boot disk for one domain and redeploy multiple copies of that image as a pre-installed boot disk for other domains."
    I can see clear advantages in using zfs in both the control domain and the guest domain, but what is the downside?
    I end up with a kind of nested ZFS, where I create a zpool inside a zpool: the first in the control domain and the second inside a guest domain.
    How is ZFS caching handled? Will I end up with a solution with performance problems and a lot of I/O overhead?
    Kindest,
    Tor
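    For the record, what I have in mind inside each guest domain amounts to the following (pool, zone and device names made up):
    # one pool per guest domain, holding every zone root
    zpool create zonepool c0d1
    zfs create zonepool/zone1
    zonecfg -z zone1 'create; set zonepath=/zonepool/zone1'
    # growth: just add another vdisk to the pool
    zpool add zonepool c0d2
    # migration: stream a zone root to another guest domain
    zfs snapshot zonepool/zone1@mig
    zfs send zonepool/zone1@mig | ssh other-guest zfs receive zonepool/zone1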

    I'm not familiar with the Sybase agent code and you are correct, only 15.0.3 seems to be supported. I think we'd need a little more debug information to determine if there is a workaround. Maybe switching on *.info messages in syslogd.conf would give some more useful hints (no guarantee).
    Unfortunately, I can't comment on if, or when, Sybase 15.5.x might be supported.
    Regards,
    Tim
    ---

  • What is best practice for multi camera edit with differing lengths?

    I have three cameras that took video of an engagement party. Camera A and B took it all (with some early extra material exclusive to each camera). Camera C took some, then stopped, then took more.
    So I have four sets of clips - A and B which are complete, then C then D.
    Should I create sequence 1 with A, B and C synchronised, then create sequence 2 with A, B and D synchronised, then sequence 3 with the sundry early non-multi-camera clips, plus sequence 1, plus sequence 2, then the late non-multi-camera clips?
    Or can I synchronise A, B and C, then on the same timeline synchronise A, B and D? I'm concerned that the second synchronisation will put C out of sync.
    What is the best way to approach this?
    Thanks in advance.

    A and B which are complete, then C then D.
    I think you're looking at this in the wrong way. You have only three cameras, A, B and C, but you don't really have a D camera, as those are just other clips from camera C. You might call them C1 and C2 if you like, but calling them D seems to be confusing the issue, as it's still only three cameras, and three shots to choose from when cutting the sequence. (Except for the gap between C clips, where you will have only the A and B shots to choose from.)
    You can absolutely sync all the clips from camera C on the same sequence as A and B. And it will probably be easier to do so.

  • Best practice for constructing an editable, saveable form?

    We're entirely new to LiveCycle here, but we're looking to create a document for one purpose: to allow a user to enter their details into the form, email/send us the completed form, and (for legal reasons) leave a fully saved copy of the form with the user.
    It is imperative that the user is left with a copy of the document in its entirety. Can this be done with LiveCycle? Or with the aid of Managed Services?
    Many thanks,
    Jordan

    Yes, it can be done... I assume that the user is, or may be, using Reader to fill the form... if so, you will have to Reader Extend the form to allow Reader to do the local save. Or... your process can email back a copy of the form if you do not want to Reader Extend. If you go the local save route, the onus is on the user to hit a save button or do a File Save... you cannot write to their local hard drive automatically (for security reasons).
    Hope that helps
    Paul
