Object manipulation

I was wondering if there was a way to change the shape of a lower third (or any of the other object boxes) to anything other than a rectangle or square.

In addition to Nick's suggestion, you can also manipulate those pre-designed objects in the Canvas or in the Inspector.
Highlight the object in the timeline and a blue outline appears around your object in the Canvas. In the upper-left corner is a square handle you can use to change the rotation. In the upper-right corner is a handle you can drag to change the size and shape.
You will find the same controls in the Inspector when you click the Attributes tab. Changing Scale X or Scale Y will change the shape of your object, while the Rotate boxes give you precise control of the angle.

Similar Messages

  • Suggestion: Object Manipulation Handles

    When resizing objects on the FP, objects only have resize handles on the corners. It would be helpful to have resize handles on each side (green circles) like "Classic LabVIEW" to constrain resizing to one direction. Also, consider having a rotation handle (where applicable).

    Just realized that resize handles already exist on the BD:

  • Very Basic Question on Threads and Object Manipulation between classes

    I have a feeling this is on the verge of being a stupid question, but hey, it's the right forum.
    Let's assume I have a class that extends JFrame: Jframe1.
    In that frame there is only one JLabel: Jlabel1.
    I want to create a thread that will affect Jlabel1.
    The thread will run an endless loop that will, for example, change the color of the label to a random color, and then the thread will sleep for a given time. There is no use for this program; it's only meant to help me understand.
    I have looked up info and examples on threads. Unfortunately, none were useful. Most examples try to illustrate the use of threads with an applet digital clock, but that does not help with my problem, not to mention I don't want to delve into applets at this time.
    I know I have to make a class that extends Thread. Does it have to be an inner class?
    How do I affect the frame's Jlabel1 from it? The compiler says it doesn't know anything about it.

    import javax.swing.*;
    import java.awt.*;
    import java.util.*;
    public class Jframe1 extends JFrame implements Runnable{
      Container con;
      JLabel Jlabel1;
      Random rand;
      Color c;
      public Jframe1(){
        setDefaultCloseOperation(EXIT_ON_CLOSE);
        con = getContentPane();
        rand = new Random();
        Jlabel1 = new JLabel("bspus", JLabel.CENTER);
        Jlabel1.setOpaque(true);
        con.add(Jlabel1, BorderLayout.NORTH);
        setSize(300, 300);
        setVisible(true);
      }
      public void run(){
        while (true){
          try{
            Thread.sleep(1000);
          } catch (InterruptedException e){
            break;
          }
          int n = rand.nextInt(16777216);
          c = new Color(n);
          // Swing components must be updated on the Event Dispatch Thread
          SwingUtilities.invokeLater(new Runnable(){
            public void run(){
              Jlabel1.setBackground(c);
            }
          });
        }
      }
      public static void main(String[] args){
        Jframe1 jf = new Jframe1();
        new Thread(jf).start(); // you don't need to create a new thread
                                // because this main thread is just
                                // another thread; here, only for
                                // demonstration purposes, we make a
                                // new separate thread.
      }
    }

  • Indesign Scripting Object Model

    Hi,
    Is there some reference documentation that provides detailed information on the InDesign object model as exposed to the scripting interface (JavaScript)?
    I have done a fair amount of scripting in Illustrator, so I was looking for the InDesign equivalent of this document:
    http://wwwimages.adobe.com/www.adobe.com/content/dam/Adobe/en/devnet/pdf/illustrator/scripting/illustrator_scripting_reference_javascript_cs5.pdf
    I looked at this document, but it seems kind of sparse (compared to Illustrator):
    http://wwwimages.adobe.com/www.adobe.com/content/dam/Adobe/en/products/indesign/pdfs/InDesignCS5_ScriptingGuide_JS.pdf
    Obviously they are different applications, but I was kind of hoping there would be more info for InDesign scripting.
    In particular, I am looking for some insight on how to effectively walk or traverse the document structure of an InDesign document using code.
    Finding specific things by type seems out of the picture, because after doing object reflection on all the page items in an InDesign document, nothing seems to have a "type" property.
    In my Illustrator scripts I was able to place stuff in Illustrator layers, give arbitrary page items a name, and then later find and manipulate them via script. It's not as clear how to do this in an InDesign context.
    The one advantage of this was that in Illustrator I was able to use text frames to hold variable data and read it later via scripts. The document becomes the database, and I could create any number of arbitrary "variables" inside the document outside the artboard. Wondering how to do this kind of stuff with InDesign so I can do template-driven documents that are managed by scripting and form UIs that I create.

    comment: Glad to hear you're clear on the new keyword. It's something that a lot of people are confused about.
    Also glad to hear you've read Crockford's book on this stuff. Honestly, I think for most InDesign usage it's a mistake to worry about any kind of inheritance... usually it's easier to ignore all that stuff.
    And to answer the question about the number of properties: probably talking a max of 100 or so. Certainly not thousands. I just want to have my own data structure embedded in the InDesign document that I can easily talk to via scripts. And this data needs to persist between instances of the JavaScript script's lifecycle. So it seems having some XML to parse is the solution, if that makes sense.
    If there were a way to deal with JSON data "hidden" within an InDesign document, I would gladly work with that. But I still need to change certain common content snippets inside the flow of the InDesign document in a consistent way.
    I think if you have a max of 100 properties, you should probably choose the methods that make for the clearest code rather than the most efficient. I suppose you might have a noticeable delay with 100, though; obviously, try it and see. I'm confused, though: I thought you needed to have your data associated with text elements inside your document, such that it is visible to a user in the user interface. Hence the proposal of using XML-tagged text inside a text frame. Did that requirement go away?
    If you just want to hide JSON in the document, then use document.insertLabel() of a string of JSON and be done with it.
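    To make the insertLabel() suggestion concrete, here is a minimal sketch of the round trip. In InDesign, `doc` would be app.activeDocument; here a tiny mock with the same insertLabel/extractLabel signatures stands in so the idea can be run outside InDesign. Also note that classic ExtendScript has no built-in JSON object, so a real script would bundle json2.js (or fall back to toSource/eval).

```javascript
// Mock standing in for app.activeDocument, matching the
// insertLabel(key, value) / extractLabel(key) signatures.
var doc = {
  labels: {},
  insertLabel: function (key, value) { this.labels[key] = value; },
  extractLabel: function (key) { return this.labels[key] || ""; }
};

// Write: serialize the script's variables and hide them in the document.
var vars = { documentNumber: "5678", description: "New descr." };
doc.insertLabel("myScriptData", JSON.stringify(vars));

// Read: a later run of the script recovers the same structure,
// because labels persist with the saved document.
var restored = JSON.parse(doc.extractLabel("myScriptData"));
// restored.documentNumber is "5678" again
```

    The label key ("myScriptData") is arbitrary; one label holding a single JSON blob keeps the whole "document as database" idea in one place.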
    You can also use the JavaScript interpreter's E4X features, which give you much richer JavaScript bindings for manipulating XML objects and files. Unfortunately, the JavaScript XML bindings have nothing to do with the InDesign XML object model. So you could, e.g., export the InDesign DOM XML into a file, read that into a JavaScript XML object, manipulate it with E4X functions, write it out to a file, and then re-import it into the DOM. Not really very convenient.
    I honestly don't know much about Javascript for browsers, so if you could be more concrete in your queries it would be easier to answer.
    Looking forward to your example snippet though.
    OK, so, in Jeff (absqua)'s example, you create a constructor for a changeNumber rule, call the constructor to make a rule, add the rule to the ruleset, change a value attribute of the rule, and then process the ruleset.
    This strikes me as... extremely goofy, for two reasons: first, the rule creation function should be parametrized with the value you want to change; and second, you go and change the rule's attributes after you have added it to a ruleset. Also, while this works in this example, I don't think it's appropriate to have the xpath attribute hardcoded in the rule creation function; it's much more plausible that you want a function that constructs a rule that changes some arbitrary tag to some arbitrary value.
    So, anyhow, if you were going to stick with the constructor paradigm, then I would change from:
    var changeNumber = new ChangeDocumentNumber();
    var ruleSet = [changeNumber, changeDescription];
    changeNumber.value = "5678";
    __processRuleSet(doc.xmlElements[0], ruleSet);
    to:
    var changeNumber = new ChangeDocumentNumber();
    changeNumber.value = "5678";
    var ruleSet = [changeNumber, changeDescription];
    __processRuleSet(doc.xmlElements[0], ruleSet);
    But I don't like this, because it's not parametrized reasonably. I would want something like:
    var changeNumber = new ChangeValue("//DocumentNumber", "5678");
    var changeDescription = new ChangeValue("//DocumentDescription", "New descr.");
    var ruleSet = [ changeNumber, changeDescription ];
    But because of my disdain for classical inheritance in JavaScript, I would instead write it as:
    function changeValue(xpath, newValue){
      return {
        name: "ChangeValue",
        xpath: xpath,
        apply: function(element, processor) {
          if (newValue) {
            element.contents = newValue;
          }
          return true;
        }
      };
    }
    var changeNumber = changeValue("//DocumentNumber", "5678");
    var changeDescription = changeValue("//DocumentDescription", "New descr.");
    Because I think that's a lot clearer about what is going on, without introducing confusion.
    Edit: removed "this.value" from the above and just used newValue directly.
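    To show how the factory behaves, here is a standalone run against a mock element (the mock stands in for the InDesign XMLElement the rule processor would pass in, since this snippet can't reach InDesign's DOM; the definition is repeated so the snippet runs on its own):

```javascript
// changeValue as above: a plain factory, no `new`, no inheritance.
function changeValue(xpath, newValue) {
  return {
    name: "ChangeValue",
    xpath: xpath,
    apply: function (element, processor) {
      if (newValue) {
        element.contents = newValue;
      }
      return true; // stop further processing of this element
    }
  };
}

var changeNumber = changeValue("//DocumentNumber", "5678");

// Mock element standing in for an InDesign XMLElement.
var mockElement = { contents: "old" };
changeNumber.apply(mockElement, null);
// mockElement.contents is now "5678"
```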
    But if you're defining a lot of rules with different apply functions, this is a very cumbersome strategy. You have all this boilerplate for each function that is basically the same, except that the apply function is different. This leads to a lot of wasted code duplication. Why would your apply function be different? Well, usually because you are writing functions that move XML elements around within the document hierarchy, or do other things that are not simply changing the value of an element.
    I would much prefer to write this as:
    var changeRule = makeRule("change",
      function(element, ruleProcessor, newValue) {
        element.contents = newValue;
        return true;
      });
    var changeNumber = changeRule("//DocumentNumber", "5678");
    var changeDescription = changeRule("//DocumentDescription", "New descr.");
    var ruleSet = [ changeNumber, changeDescription ];
    This kind of syntax makes it much more compact to create arbitrary rules with different apply functions. In retrospect, the makeRule() function is named wrongly, because it returns a function that makes a rule. Perhaps it would be more reasonable to call it makeRuleMaker().
    Anyhow, the downside is that the makeRule() function is kind of heinous. I wrote this in March in Re: How to shift content with in cell in xml rules table. But:
    //// XML rule functions
    // Adobe's sample for how to define these is HORRIBLE.
    // Let's fix several of these problems.
    // First, a handy object to make clearer the return values
    // of XML rules:
    var XMLmm = { stopProcessing: true, continueProcessing: false };
    // Adobe suggests defining rule constructors like this:
    //   function RuleName() {
    //       this.name = "RuleNameAsString";
    //       this.xpath = "ValidXPathSpecifier";
    //       this.apply = function (element, ruleSet, ruleProcessor) {
    //           // Do something here.
    //           // Return true to stop further processing of the XML element
    //           return true;
    //       }; // end of apply function
    //   }
    // And then creating a ruleset like this:
    //   var myRuleSet = new Array(new RuleName, new AnotherRuleName);
    // That syntax is ugly, and is especially bad if
    // you need to parametrize the rule parameters, which is the only
    // reasonable approach to writing reasonable rules. Such as:
    //   function addNode(xpath, parent, tag) {
    //       this.name = "addNode";
    //       this.xpath = xpath;
    //       this.apply = function (element, ruleProcessor) {
    //           parent.xmlElements.add(tag);
    //           return XMLmm.stopProcessing;
    //       };
    //   }
    // and then creating a ruleset like:
    //   rule = new Array(new addNode("//p", someTag));
    // So instead we introduce a makeRule function, that
    // allows us to leave behind all the crud. So then we can write:
    //   addNode = makeRule("addNode",
    //       function(element, ruleProcessor, parent, tag) {
    //           parent.xmlElements.add(tag);
    //           return XMLmm.stopProcessing;
    //       });
    // and use:
    //   rule = [ addNode("//p", someTag) ];
    function makeRule(name, f) {
        return function(xpath) {
            var
                   // "arguments" isn't a real array, but we can use
                   // Array.prototype.slice on it instead...
                   args = Array.prototype.slice.apply(arguments, [1]);
            return {
                name: name,
                xpath: xpath,
                apply: function(element, ruleProcessor) {
                        // Slice is necessary to make a copy so we don't
                        // affect future calls.
                    var moreargs = args.slice(0);
                    moreargs.splice(0, 0, element, ruleProcessor);
                    return f.apply(f, moreargs);
                }
            };
        };
    }
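    To see the makeRule() machinery run end to end outside InDesign, here is a self-contained sketch with a mock element in place of a real XMLElement (the definition is repeated so the snippet stands on its own):

```javascript
// makeRule(name, f) returns a rule *factory*: call the factory with an
// xpath plus any extra arguments, and get back a rule object whose
// apply() forwards element, ruleProcessor and those extra arguments to f.
function makeRule(name, f) {
  return function (xpath) {
    var args = Array.prototype.slice.apply(arguments, [1]);
    return {
      name: name,
      xpath: xpath,
      apply: function (element, ruleProcessor) {
        var moreargs = args.slice(0);
        moreargs.splice(0, 0, element, ruleProcessor);
        return f.apply(f, moreargs);
      }
    };
  };
}

var changeRule = makeRule("change",
  function (element, ruleProcessor, newValue) {
    element.contents = newValue;
    return true; // stop further processing
  });

var rule = changeRule("//DocumentNumber", "5678");

// Mock element standing in for an InDesign XMLElement.
var mockElement = { contents: "" };
rule.apply(mockElement, null);
// mockElement.contents is now "5678"; rule.xpath is "//DocumentNumber"
```

    In InDesign proper, the rule object would go into a ruleset passed to __processRuleSet(), which matches rule.xpath against the document and calls apply() for each hit.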

  • OCCI and Object Views

    Can you use OCCI with object views? Here is the problem that I am seeing:
    SCHEMA.SQL:
    CREATE TABLE EMP_TABLE (
        empnumber NUMBER (5),
        job VARCHAR2 (20)
    );
    CREATE TYPE EMPLOYEE_T AS OBJECT (
        empnumber NUMBER (5),
        job VARCHAR2 (20)
    );
    CREATE VIEW EMP_VIEW OF EMPLOYEE_T
        WITH OBJECT IDENTIFIER (empnumber) AS
        SELECT e.empnumber, e.job
        FROM EMP_TABLE e;
    In the code, I try:
    env = Environment::createEnvironment(Environment::OBJECT);
    conn = env->createConnection(username, password, connection);
    RegisterMappings(env);
    Employee* e = new(conn,"EMP_VIEW")Employee(); //works
    Ref<Employee>e1=new(conn,"EMP_VIEW")Employee(); //fails
    Debugging the code, I get an access violation in the Ref constructor:
    template <class T>
    Ref<T>::Ref(const T *obj)
    {
        rimplPtr = new RefImpl((PObject *)obj); // <== Access violation
    }
    System specs:
    Windows 2000AS
    Oracle 9.2.0.4

    Sorry to be so long in replying; we decided to move back to objects without the views. Our DBAs were not happy, but sometimes that's how it goes. I did not want to have to use associative access for my object manipulation, because it is not robust enough for real code, IMHO. So I would rather do something that I can make work in the real world than have a database model that makes me write what I consider to be bad code. I do wish that OCCI would become more exception-safe, and I wish that there were not cases where associative access worked but navigational access did not.
    <rant>
    One observation about OCCI in general is that using associative access seems to be a bad idea in almost all real-world scenarios. Actually, most of the OCCI constructs seem to be less than optimally designed from a C++ perspective. Tossing raw pointers around is not the way toward robust software, and most of the constructs in OCCI do that and worse. None of the high-level constructs like Environment, Connection, Statement, etc. are exception-safe in any sense of the word. I have (like a lot of you, I'm sure) had to write a layer on top of the base OCCI classes to enforce some level of exception safety. Not that it's a big deal or anything, but it's just one more thing I would rather not have to do...
    </rant>

  • WORKAROUND! - Sluggish Motion V 4.0.1 on Mac Pro Nehalem 2.97 GHz 16 Vcore

    Hi guys, like several people on this forum and others, I have been very frustrated with the performance of both Motion.app version 3 and now Motion.app V4.0.1 (latest update) on my 2009 Mac Pro Nehalem 2.97 GHz 16 Vcore, 12 GB, with the ATI Radeon HD 4870 graphics card.
    I have two workarounds that may help relieve this issue. They work for me; however, before I discuss them I'd like to set the scene.
    Like many of us, I have trawled this forum and many others, including COW and so on, mostly relying on the dozen or so excellent tips from people like Mark Spencer and others. This includes the usual things like turning off rendering options that make heavy use of the CPU/GPU etc.
    Caveat: I'd also like to note that this sluggishness has absolutely nothing to do with (and is completely unrelated to) I/O performance, nor is it related to memory usage. The reason I say this is that, during testing and in normal practice, I keep all Motion.app cache files (autosave, cached files etc.) on a very fast PROAVIO EB8NS disk array with an ATTO R380 HBA on SAS paths; this box is capable of over 450 MB per second as measured with the Kona system test tool. Additionally, I have 12 GB of memory on this Mac Pro, of which I assigned 70% to be available to Motion.app V4. The sluggishness in Motion.app V4 does not manifest itself through the usual signs such as excessive virtual-memory page in/out or swap in/out activity; in fact, in the scenario I describe below, with nothing else running, that activity is non-existent. Motion.app V4 in my environment is definitely and certainly NOT impacted by insufficient I/O bandwidth.
    However, like many people who have the same or a similar setup to mine, we experience severe delays in the most basic Motion.app UI actions, such as selecting an item, scrubbing through the timeline, or performing any kind of transform. Many of us trying to diagnose this problem (looking for any kind of diagnostics or messages) have noticed that, with the exception of one single core, the remaining cores do very little (idle or barely moving) whilst that single core is pegged at 100% constantly.
    Many months ago, whilst really bogged down with this problem trying to complete a project, I contacted Mark Spencer, who politely explained that Motion exploits the on-board graphics card (in my case the ATI Radeon HD 4870). After frustratingly trying to improve performance, and completely reinstalling Final Cut Suite onto a brand-new install (at the time Mac OS X 10.5.6), I found that there was little difference.
    I mention all this because I know that a few colleagues have had a similar problem and have found no satisfaction, despite many well-informed forum users offering excellent advice.
    At my wits' end, I took a few days to try to isolate what causes (or exacerbates) this severely degraded performance. This sluggish degradation of the user interface also manifests as the spinning beach ball.
    I isolated pieces of some Motion.app projects and started to ISOLATE (Control-I) what I thought were quite complex layers and individual objects, but to no avail. I then resorted to completely deleting them, with some improvement in performance (noticeably less sluggishness); however, it still was not lightning fast as I expected, considering I was in draft mode at 720p resolution with lighting, shadows and reflections completely turned off in the view.
    *Motion.app BEHAVIOURS*: after some hours and a great deal of trial and error (non-productive work time, I might add), it occurred to me that some of the Motion.app behaviours in one particular layer with as few as five objects (behaviours such as "Motion path", "wriggle", "oscillate" and even good old trusted "throw") seem to be the culprits.
    I will note well at this point that "camera layers" do not seem to be affected at all by the basic movements such as Dolly, Framing, Sweep etc. I have some sets with up to four or five cameras, which seems not to bother Motion.app at all!
    *OBJECT SELECTION(S):* yes, this also seems to have some degrading impact on playout, scrubbing and so on. Interesting, I thought.
    Workarounds: here are a few things that I know work to get around some of the problems where Motion.app is extremely sluggish in the user interface (UI). Perhaps someone has already noted these on a forum; however, I couldn't find them.
    So my workflow/practice is to do the following when I'm manipulating objects (placement, set-up, applying filters, adjusting the timeline, and so on):
    +WORKAROUND #1+: *DISABLE the behaviours* in the layers where you are performing any object manipulation. Where you have many objects in many layers and the topmost layer (a group of nested layers) contains any behaviours, I would suggest you disable those whilst you work on that particular layer.
    It's a very simple stylus click to re-enable the BEHAVIOURS in the layers you have been working on, to see the effect of those behaviours when you want to play them out in the player/canvas.
    I have no idea why such trivial behaviours cause my very expensive Mac Pro to nearly lock up whilst in Motion.app version 4 (and also version 3). Anyway, it definitely works for me.
    +WORKAROUND #2+: *DESELECT ANY ITEM IN THE CANVAS* (in any view) when attempting to play out whilst in the canvas. Again, I don't know why having one or more items (including the camera) selected would have such a drastic impact on Motion.app's ability to play out what I think is a trivial range; however, this really works for me.
    *How can I measure or monitor the performance/usage of the ATI Radeon HD 4870 graphics card from Mac OS X?* I would like to know exactly what it is doing on my system when one of the CPU cores is pegged at 100% servicing requests from Motion.app. I can't find a utility (and I am willing to pay for one) that sees the ATI Radeon HD 4870 graphics card. One I thought would work is an application called atMonitor.app (http://www.atpurpose.com/atMonitor/), which seems to do a pretty good job on my unibody MacBook Pro and my wife's MacBook Air.
    It would seem that none of the utilities see the ATI Radeon HD 4870, or, if they do and they send commands to it, the card doesn't respond or return any status information. (Beats me; I'm not a technical person these days.)
    Summary: I would be very interested to hear whether other people who have a similar, if not near-exact, setup to mine have found these two suggestions useful. I have tried some of these Motion.app projects on my unibody MacBook Pro, and with some differences the MacBook Pro exhibits similar sluggishness to this Mac Pro.
    W
    Hong Kong

    Mark, I took your advice and reinstalled Final Cut Suite version 3 (including Motion.app version 4.0.1) onto a completely different and unused file system, including a brand-new install of Mac OS X Snow Leopard 10.6.1, which I believe is the most recent update.
    To be clear, I have applied all the latest Pro Applications updates to this brand-new system as well.
    So here is the *bad news*.
    *Using exactly the same hardware configuration as I described in this theme of woe, I launched Motion.app V4.0.1 and opened up several of the projects with which I'm having the trouble at issue.*
    Simply put, the symptoms and observations I described in this thread persist with a brand-new installation.
    Anyway, it looks like I have a backup/another instance of Final Cut Pro available in case my main production image has trouble.
    Once again, as I described initially in this thread, I used the two workarounds to overcome the consistent and exaggerated lag (20 seconds on average) in interactions with Motion.app, with the success I described previously.
    In another thread, back in September 2009, a forum poster mentioned that he felt the camera framing behaviours were the root of this problem. To that end he advised converting these behaviours to keyframes using Command-K ("convert to keyframes") to overcome his misery.
    This indeed works very well. So this is the third workaround, and one I'm starting to use.
    Further, I have done some more hard comparisons with my 2009 MacBook Pro unibody 2.93 GHz, with a standard arrangement of 4 GB of memory and the usual internal disk. *To my great surprise*, I found that the projects I'm having a lot of trouble with were much more responsive, not only with the camera framing behaviours (which I'm not particularly having any trouble with), but also in the layers that contain simple transform behaviours such as "motion path", "wiggle", and so on, which did not seem to bother the MacBook Pro.
    I watched the CPU cores on the MacBook Pro go to maybe 80% while the project was loading, and whilst I gingerly navigated through the project it felt a little sluggish but was on the order of 10 to 20 seconds faster than this Mac Pro Nehalem with all the bells and whistles my earnings paid for.
    Now I'm starting to wonder, as Andy pointed out, +maybe there is some trouble with the hardware in this system+. I really don't know how to determine whether this is the case. However, before I finally draw that conclusion, I note that there are a lot more people out there with the same configuration as mine who have this same problem. It is not narrowed down to owners of the ATI Radeon HD 4870 graphics card; however, these seem to be the bulk of the people who are getting a fair amount of forum time (like me).
    There certainly seems to be something of an issue, as there are some crash dumps for Motion.app, but only one or two over this period that seemed to be triggered by something on this machine. And, like many on this forum, I gave up reading dumps for a living back in the 1990s.
    I would really like to know how I can diagnose this problem before I have to pack this Mac Pro up and cart it back to the local AppleCare centre here in Hong Kong (which means I won't have it for a fair while, which means no income). Yes, I can use my MacBook Pro unibody, which is what I might end up doing.
    _Oh, and this little gem._ I find that if I invoke the timeline pane/window (Command-7), it can take up to 2 seconds to refresh the screen. It is amazingly slow and feels like there is definitely something wrong.
    If I have a simple Motion project (by that I mean some simple plates, some moving text and no real 3D except for one little camera), Motion.app V4 seems to work quite okay.
    The sets I'm working in have about eight or nine cameras over the timeline and maybe three or four sets in the project. This should not be a lot of work, I think, especially when most of the movement here is a few objects wobbling around in 3D space and quick camera moves using the camera framing behaviour.
    In summary, having spent what seems to be 40 or 50 hours messing with this issue instead of doing anything productive, I've come to the conclusion that there is definitely something wrong with the setup that I have. However, I had no problem with it until now (actually, I had some problems back in May 2009, but it seemed that installing Final Cut Suite version 2 completely fixed them).
    And I am totally surprised that my MacBook Pro unibody, if pressed, can do the job that I want.
    Therefore, Motion Forum users, I really don't know what else there is to do other than to poke around over time until eventually something fixes this, because it is really very frustrating. Like many of us on this forum, I put capital funds into a system highly recommended by many who use Apple production applications, and I expected it would work.
    By the way, I notice that when I use Nuke version 5, Shake V4.1 and even Maya PE, I don't have these issues, though I concede that I doubt whether those applications take direct advantage of the GPU in the graphics card the way Motion.app does.
    That's all I have.
    Any other thoughts are most welcome.
    W

  • EJB 3.0 / Data control binding / ADF Faces

    We have a great and very useful tool for generating rich-client web applications. Generating a web component from a data structure is easy, but managing the full life cycle is a little more complex. This complexity increases dramatically when the tool/platform offers many models to solve the problem.
    In JDev we have three major choices:
    1) ADF Business Components
    2) TopLink POJOs
    3) EJB 3.0
    From the client side, and in my point of view (at my current understanding), the first and second choices are the easiest way to manage persistence because they are what I'd call "bidirectional": I use the same data structure to persist to and synchronize with the database, and I need only call a method on the object to push my changes from the middle tier into the database.
    EJB 3.0 seems more complex, essentially due to the presence of the session bean, which acts as a conversational API. I like this architecture because it is the most flexible, but from my client I have to manage these APIs instead of one data structure. That means I have to catch/detect every update, insert or delete in my business object and write the corresponding code to apply it to the correct entity-bean methods via my session bean.
    Why not manage this logic in the data control framework, by declaring which methods are to be used as finders to get the data and which methods are to be used to manage persistence (persist, merge, remove) at the object level?

    Exactly what I mean.
    Actually, the Data Control architecture seems business-object oriented, with native methods called on the same object to retrieve (for the most part done) and synchronize the data with the database.
    This is fine when your architecture doesn't provide a service layer like a session facade.
    With a session facade, you access different business methods to retrieve data and synchronize it with the database, not different methods on the same object as the application module does in ADF BC.
    I see two solutions. The best is to extend the data control framework to offer standard CRUD operations and a tool to map these to one or many methods on the same session bean (does it have to be the same? That's a question of architecture; I think it does, but the framework may permit another one for flexibility). This could work the same way as for ADF BC or TopLink, but the methods could be selected automatically by the tool (wizard).
    The second solution is to build a specialized session bean with mandatory standardized methods and map it directly in the framework. The advantage I see is that the updates could be collected only once by the session bean, by passing a specialized object manipulated by a specialized method.
    In fact, the best solution is something that combines mapping in the data control with a specialized set of methods in the session bean to execute a transaction as a whole at the server level.
    What remains to be decided is whether the framework (the data control part) has to maintain a status for attributes (or sets of attributes corresponding to a data control node) that are newly inserted, updated or deleted, and to synchronize only the changes.
    I'm very surprised that something like this is not already in the framework.
    How do I take care of changes made in a data control node object when I have used it to generate ADF Faces, for example in master/detail pages with an edit form as in the OTN examples? I see that the detail list in the detail table is updated, so the data control is updated correctly by the edit form when I submit the changes, but how do I finally get these changes and synchronize them with the database?
    P.S. I replied previously by mistake by email, sorry... note that the email is not exactly the same as this post.
    Michel
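    The "specialized session bean with mandatory standardized methods" idea above can be sketched in plain Java. This is a minimal illustration with hypothetical names (CrudFacade, InMemoryFacade), not ADF or EJB API: one interface exposing the finder and persistence methods a data control could map to. A real EJB 3.0 session bean would delegate these calls to an EntityManager; an in-memory map stands in here so the contract itself is the focus.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical standardized CRUD contract a data control could map to.
interface CrudFacade<K, V> {
    V find(K id);               // the method declared as the "finder"
    void persist(K id, V obj);  // apply newly inserted rows
    void merge(K id, V obj);    // apply updated rows
    void remove(K id);          // apply deleted rows
}

// Stand-in backend; a real session bean would call an EntityManager here.
class InMemoryFacade<K, V> implements CrudFacade<K, V> {
    private final Map<K, V> store = new HashMap<>();
    public V find(K id)              { return store.get(id); }
    public void persist(K id, V o)   { store.put(id, o); }
    public void merge(K id, V o)     { store.put(id, o); }
    public void remove(K id)         { store.remove(id); }
}

public class FacadeDemo {
    public static void main(String[] args) {
        CrudFacade<Integer, String> facade = new InMemoryFacade<>();
        facade.persist(1, "first row");     // data control detected an insert
        facade.merge(1, "edited row");      // ... then an update
        System.out.println(facade.find(1)); // prints "edited row"
    }
}
```

    With a fixed contract like this, the framework could collect only the changed rows per node and call persist/merge/remove once per change, instead of the client hand-coding the synchronization for every business object.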

  • Data at All level does not match after applying security

    Hi,
    We are implementing security and observed the following.
    1. Data is loaded into the cube correctly and the report shows correct data at all levels.
    2. Now we apply the security, which restricts the users to seeing some members according to the role they are mapped to.
    3. When we create a report, the values at the second and all lower levels are correct, but the value at the All level still shows the same as in step 1. This means that the value is not dynamically aggregated while creating the report.
    We also checked that the values are not precomputed at the All level for any dimension.
    Any pointers to resolve this?
    Thanks in advance.
    Thanks
    Brijesh

    John, a sure-shot way to simulate the relational aggregation for various users (who have VPD applied on the fact information) is to create an AW for each user. That way the scoping of dimension hierarchies and/or facts occurs on the relational source views, and each user only sees the summation of the visible values in the AW. You can use (local) views in each user's schema on the appropriate AW and make the application access seamless across all the user schemas. Such a solution may be a bit redundant (leaf data present in multiple AWs, increasing load time) but should work in all environments, since it does not involve any tweaking of internal objects via custom OLAP DML.
    +++++++++++++
    Regarding implementing the approach of having a single AW service multiple users while allowing individual users to see only their data (for both base and summary data): this can be done in 10gR2. We have used this approach with a 10gR2 AW based on suggestions from people who were in the know :)
    Please note the disclaimers/caveats..
    * Works for 10gR2 with no foreseeable way to port this onto 11g when olap dml object manipulation is prevented by the lock definition (lockdfn) keyword.
    * Custom code needs to be run at startup.. Preferably in PERMIT_READ program alone since this way any changes made by any of the restricted user(s) do not get saved/commited. This is the manner that sql query on AWM using vpd (say) would work.
    * OLAP DML Code is very dependent on the nature of the cube structure and stored level setting.
    * This approach provides for a neat and nifty solution in the context of a PoC/demo but the solution performs a (possibly exhaustive) cleanup of stored information during the startup program for each user session. And since this happens in the context of a read only session, this would happen every time for all user sessions. So be sure to scope out the extent of cleanup required at startup if you want to make this a comprehensive solution.
    *********************** Program pseudo code begin ***********************
    " Find out current user details (username, group etc.) using sysinfo
    limit all dimensions to members at stored levels
    limit dim1 to members that user does *not* have access to.
    NOTE: This can lead to perf issues if PROD dimension has thousands or millions of products and current user has access to only a few of them (2-3 say). We will have to reset the stored information for the majority of products. This is undo-ing the effects of the data load (and stored summary information) dynamically at runtime while the users request a report/query.
    limit dim1 add descendants using dim1_parentrel
    limit dim1 add ancestors using dim1_parentrel
    limit dim1 keep members at stored levels alone... use dim1_levelrel appropriately.
    same for dim2 if reqd.
    "If we want to see runtime summation for stores in North America (only visible info) but see the true or actual data for Europe (say).. then we need to clean up the stored information for stores in North America that the current user does not have access to.
    Scenario I: If Cube is uncompressed with a global composite.. only 1 composite for cube
    set cube1_meas1_stored = na across cube1_composite
    set cube1_meas2_stored = na across cube1_composite
    Scenario II: If Cube is uncompressed with multiple composites per measure.. 1 composite per cube measure
    set cube1_meas1_stored = na across cube1_meas1_composite
    set cube1_meas2_stored = na across cube1_meas2_composite
    Scenario III: If Cube is compressed but unpartitioned..
    set cube1_meas1_stored = na ... Note: This can set more cells as null than required. Each cell in status (including cells which were combinations without data and did not physically exist in any composite) get created as na. No harm done but more work than required. The composite object may get bloated as a result.
    Scenario IV: If Cube is compressed and partitioned..
    Find all partitions of the cube
    For each partition
    Find the <composite object corr to the partition> for cube
    set cube1_meas1_stored = na across <composite object corr to the partition>
    "Regular Permit Read code
    cns product
    permit read when product.boolean
    *********************** Program pseudo code end ***********************
    The cube in our aw was uncompressed/partitioned (Scenario I).
    It is more complicated if you have multiple stored levels along a dimension (possible for uncompressed cubes) where you apply security at an intermediate level. Ideally, you'll need to reset the values at load/leaf level, overwrite or recalculate the values for members at all higher stored levels based on this change, and then exit the program.
    HTH
    Shankar

  • Open page in a new window..

    I am making a website and I would like to make a few links on it open in a new window. I know the target="_blank" attribute, but here is my question...
    If someone has a pop-up blocker, will they not be able to see the new window? Is there a different code I could use so it won't be seen as a 'pop-up'?

    Al Sparber- PVII wrote:
    > I simply wanted to state for the masses that "DOM Scripting" is nothing more than a non-standard term to describe certain approaches a programmer might take in a JavaScript, and that the approach really does not need to be named - unless one wants to make a point or to set his script apart in some way from older scripts. It is not a new or different technology.
    After a night's sleep, I decided to respond to this statement. Al is certainly entitled to his opinion. But for anyone who might still be following this thread, as I will demonstrate, it is simply incorrect to say that DOM Scripting is nothing new. It really is.
    But Al chose his words carefully. Is it new "technology"? Perhaps not, depending on how one defines "technology". But to spend time parsing that term is to overlook the larger issue that Al is making. He has argued here, and many times elsewhere, that those who advocate DOM Scripting are perpetrating some sort of hoax, suggesting those scripts are somehow better than other scripts.
    I would not make such a suggestion at all. There is an obvious need for both sorts of scripting. I myself have created many of each, and each has a perfectly legitimate purpose. But I do recognize that they each use their own sets of tools and techniques. They are simply different, one from the other, even if we have the luxury of using the same scripting language for both - just as we use the same language to write poetry or prose, to write essays or novels.
    Before getting to DOM Scripting, let me begin by holding up AJAX - ignoring for now its merits and demerits - as an example of a technique that is (or was) new and different. It uses JavaScript, to be sure, but what made it new and different was that it uses Objects and Methods that browsers did not (fully) support until recently. So it is not the JavaScript that is new and different; it is the Objects and Methods that are. In addition, over time, developers worked out better, more powerful and more effective ways of using them. Like AJAX or dislike it, it is new and different and allows us to do on the web things that could not be done before.
    So too with DOM Scripting. Again, it (usually) uses JavaScript. But what makes it new and different is that it employs Objects and Methods that modern browsers added only recently. They are new and different. They (or access to them, anyway) were simply not there before.
    Take a step back and look at DHTML. It was a valiant attempt to do what we now do far better with DOM Scripting. It used not the Document Object Model (DOM) but the Browser Object Model (BOM). It was a brave and clever few who could tame the wild and woolly incompatibilities created by each browser manufacturer with their unique BOMs. It too was new and different, as it was manipulating the BOM in ways that had not been done before.
    Then, as a result of a large and vocal cadre of forward-thinking designers, the browser makers finally created browsers that stuck (some better than others) to defined Standards, and added real support for the various DOM Objects and their Methods. It is these that DOM Scripting manipulates, and it is these that distinguish it from DHTML (BOM) and the non-DOM-Object-manipulating javascripts that came before. And over time clever folk have figured out better and better ways of using it, with ever-more-clever scripts.
    DOM Scripting is not rocket science, but to argue that it is not new and different simply ignores the fact that it is a methodology for manipulating Objects and Methods that simply did not exist before. It is based on a new and different paradigm, with a new set of tools and rules.
    Al Sparber wrote:
    > does not need to be named - unless one wants to make a point or to set his script apart in some way from older scripts. It is not a new or different technology.
    As I've shown above, in fact it IS new and different. The new Objects and Methods that modern browsers support *make* it new and different. They open amazing opportunities for creating useful, interesting, and powerful web sites (that are more likely to be cross-browser compatible, too).
    One does have to take the time to learn the details before diving in. If you are interested in learning more (and I for one have *lots* more to learn), there are some good books, among them:
    Jeremy Keith, "DOM Scripting" (http://domscripting.com/),
    Peter-Paul Koch, "PPK on JavaScript" (http://www.quirksmode.org/),
    and many, many more.
    There are also a zillion websites devoted to the subject. First you might want to read the Wikipedia entry, http://en.wikipedia.org/wiki/DOM_Scripting, and then perhaps http://www.webstandards.org/action/dstf/, http://www.w3.org/DOM/ and links at http://clagnut.com/blog/364/, to name just a very few.
    E. Michael Brandt
    www.divaHTML.com
    divaPOP : standards-compliant popup windows
    divaGPS : you-are-here menu highlighting
    divaFAQ : FAQ pages with pizazz
    www.valleywebdesigns.com
    JustSo PictureWindow
    JustSo PhotoAlbum

  • Sequence PK generation is messed up after table import

    Thought I'd post this here, where the possibility of direct object manipulation is what is needed.
    Recently I created a new 10.2.0.1.0 database on a new machine as a copy of one running on another machine. I installed Apex 2.2 and used its data import feature to transport tables from the one machine to the new one. All tables were imported exactly as they were in the original DB. Primary keys and column types are set exactly the same as the originals, along with the exact same AUTOID_PK column definition. A new sequence was created and associated with the correct PK column as in the original. The data import was all successful.
    Then I imported the associated Apex application from the original DB, exported as 2.2, into the new 2.2 Apex.
    The problem occurs now when attempting to add new records: I get a PK violation, apparently due to duplicate keys being created at insertion time by the PK's associated sequence.
    I've done this same procedure in the past and not had this problem.
    Using SQL Developer to look at the associated PK index, I see that the index shows 205 distinct keys, 205 rows, and a sample size of 205.
    ++++ Does the problem stem from the fact that the next sequence value to be generated is 206? ++++
    I'm not a pro with the functionality of Oracle, as you may be able to tell.
    The duplicate AUTOID value may come from the fact that although there are only 205 records and 205 distinct AUTOID values, the values themselves do not run from 1 to 205, since there have been many deletions and additions. The highest value in the AUTOID field is 376, and I would assume the sequence needs to start generating from 377 forward so as not to cause this PK violation.
    That is a deduction rather than established fact. Can anyone help with identifying the problem if I'm wrong, and can I get some help correcting this so I can continue on from where I left off?
    This is a problem in 2 tables at the moment, shouldn't be any more.
    Thanks to anyone who can help.
    Jacob

    If I understood you correctly, sequences were precreated on the target db before running the import. If so, that is where the problem was created. Oracle exports a sequence definition not with its original START WITH value but with START WITH last_number. This ensures the imported sequence will not generate already-generated values. You most likely precreated the sequences with START WITH 1 and used imp with IGNORE=Y. As a result, the sequences started to generate duplicate values.
    SY.
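    SY's explanation can be illustrated in plain Java (this is not Oracle code, and the names are made up for the example): a sequence precreated with START WITH 1 collides with imported rows whose highest existing key is 376, while one started at max + 1 does not. In the database itself the fix is to recreate, or advance, the sequence so its next value is greater than MAX(AUTOID).

```java
// Toy model of a database sequence: hands out consecutive values
// starting from its START WITH value.
class SimpleSequence {
    private long next;
    SimpleSequence(long startWith) { this.next = startWith; }
    long nextVal() { return next++; }
}

public class SequenceDemo {
    public static void main(String[] args) {
        long maxImportedId = 376; // highest AUTOID already present in the table
        SimpleSequence startedAtOne = new SimpleSequence(1);
        SimpleSequence startedPastMax = new SimpleSequence(maxImportedId + 1);
        System.out.println(startedAtOne.nextVal());   // 1 -> duplicate key on insert
        System.out.println(startedPastMax.nextVal()); // 377 -> safe
    }
}
```

    The same logic applies however the sequence is advanced: as long as its next value exceeds the highest key already in the table, inserts stop violating the PK.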

  • Serialize DOM between two ejb's performance

    Hi,
    I am working with XML in an ejb container and I used a JDOM Document (org.jdom.Document) to send the xml document between two ejb's. I picked JDOM because Document implements the Serializable interface. Then I did some performance tests on sending JDOM between two ejb's. I also did tests on sending a String representation of the XML between the same ejb's. The results surprised me very much.
    Here is the testing code:
    // init
    dom.loadFromPath("C:\\booklist.xml");
    String xml = dom.getXML();
    org.jdom.Document doc = db.build(dom.queryDocument());
    // ... ejb lookup etc...
    // ejb test
    long start2 = System.currentTimeMillis();
    for (int i = 0; i < 1000; i++)
        test.jdomTest(doc); // jdom document
    long end2 = System.currentTimeMillis();
    // string test...
    long start1 = System.currentTimeMillis();
    for (int i = 0; i < 1000; i++)
        test.stringTest(xml); // String
    long end1 = System.currentTimeMillis();
    and the results:
    JDOM: 41703 ms
    String: 313 ms
    Sending the String and parsing it into a DOM in the receiving ejb: 5266 ms
    I had always thought that working with xml in string format would be too much overhead.
    Could somebody clarify this for me? Do these test results make any sense?
    - thanks, Lubbi Tik

    The serialisation process works at the object level, so to serialise an object you must first serialise every attribute of that object. The specification for serialisation states that every object which wants to be serialisable must have attributes which are primitive types (e.g. int, long, etc.) or attributes which are objects that are themselves serialisable. So to serialise a JDOM Document you must serialise each of the child nodes in the document; that is a lot of object manipulation in this example. To serialise a String, however, is much easier, as it is a much less complex type.
    Cheers,
    Peter.
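    Peter's point can be demonstrated without JDOM at all. The sketch below uses a made-up Node class as a stand-in for a document tree: serializing the object graph walks and writes every object in it, while an equivalent flat String serializes as one mostly contiguous block, so the graph costs far more bytes (and time).

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Stand-in for a document node: each node drags a String and a List
// into the serialized stream.
class Node implements Serializable {
    String name;
    List<Node> children = new ArrayList<>();
    Node(String name) { this.name = name; }
}

public class SerializeCost {
    // Serialize any object to a byte array so we can compare sizes.
    static byte[] serialize(Serializable o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Node root = new Node("root");
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            root.children.add(new Node("child" + i)); // 1000-object graph
            sb.append("<child/>");                    // roughly equivalent markup
        }
        byte[] tree = serialize(root);
        byte[] text = serialize(sb.toString());
        // Per-object headers and references make the graph much larger.
        System.out.println(tree.length > text.length); // prints "true"
    }
}
```

    The same per-object overhead applies to the walk time during serialization, which is consistent with the JDOM-versus-String timings reported above.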

  • Rman - "report unrecoverable" concept

    Hi
    I am reading a book on RMAN (RMAN Recipes 11g from Apress).
    I am not able to understand the purpose of the "report unrecoverable" command. The book says:
    >
    Ref: Section 8.3 - page# 229:
    You want to identify which datafiles have been affected by unrecoverable operations, since RMAN needs to back up those files as soon as possible after you perform an unrecoverable operation.
    Use the report unrecoverable command to find out which datafiles in the database have been marked unrecoverable because they’re part of an unrecoverable operation. Here’s an example showing how to use the report unrecoverable command:
    RMAN> report unrecoverable;
    Report of files that need backup due to unrecoverable operations
    File Type of Backup Required Name
    1 full /u01/app/oracle/data/prod1/example01.dbf
    RMAN>
    Could someone please give examples of unrecoverable operations? And how could rman/oracle know beforehand if such an unrecoverable operation is going to be performed on certain datafile(s)?
    Thanks

    user12033597 wrote:
    > Could someone please give examples of unrecoverable operations? And how could rman/oracle know beforehand if such an unrecoverable operation is going to be performed on certain datafile(s)?
    A datafile or tablespace is marked unrecoverable if any unrecoverable operation has been performed on it since the last backup taken of that datafile.
    A little explanation:
    Whenever an operation occurs on the database, i.e. any DML occurs on tables, that operation generates REDO information in the redolog files. When someone explicitly disables logging to the redolog file (with the NOLOGGING hint), the datafile is marked as unrecoverable.
    Oracle calls it unrecoverable because Oracle cannot perform recovery by reading from the redolog or archive files if the datafile crashes and needs recovery. That's why RMAN warns you about such operations that could have taken place and points you to the datafiles on which they were performed. The operations which are unrecoverable are:
    1) direct load / SQL load
    2) direct-path inserts resulting from INSERT or MERGE statements
    3) ALTER TABLE commands
    4) CREATE and ALTER INDEX commands
    5) INSERT /*+ APPEND */
    6) partition manipulation
    7) database objects explicitly set with the NOLOGGING option
    8) Oracle eBusiness Suite concurrent job execution identified in Oracle Metalink note 216211.1
    9) Oracle eBusiness Suite patch activities that involve database object manipulation
    10) SQL*Loader with NOLOGGING
    Once you take a full backup of the datafiles affected by these operations, the unrecoverable warning is cleared, because Oracle now has a backup of those files.
    What precautions can you take to keep a datafile from becoming unrecoverable?
    Use FORCE LOGGING.
    Also see
    http://www.pythian.com/news/7401/oracle-what-is-an-unrecoverable-data-file/
    Hope this helps you in understanding.

  • Problems with Photoshop rendering text in 3D

    Can anybody out there help me with this? It seems each time I use the 3D function in PS CC, directly after extrusion my text appears rather deformed; mostly the curved characters are flattened somewhat. I tried upping the dpi but I still end up with the same result... attached is an example of what I mean. Very frustrating.

    If counting backwards works, then use that. Some object manipulations may change the internal state of ID's object counters, and you'll get unreliable results trying to work from start to end.
    An afterthought: did you have anchored objects in your original text? Text inside these also count as a separate story, but it's removed when you empty out its parent story -- thus, possibly, throwing off the original counts.

  • Strategy for database exceptions

    I am designing our strategy for handling database exceptions which may occur upon saving for instance.
    For example, if a user tries to save an object which violates a unique key constraint, I get that error wrapped by the DatabaseException class as documented.
    The wrinkle is that I would really like to report to the user which fields caused this violation. We are using an oracle database for the foreseeable future, so portability is not critical.
    Has anyone designed something which can get this metadata and attach it to the exception? Any tips would be much appreciated.
    Cheers,
    craig

    I have found it very helpful to separate the database / recordset from the rest of the project. If you create objects that model your records, then the rest of your program doesn't know or care where the data came from. You could easily write store / retrieve methods to deal with the data from a file, over the web, from a socket connection, etc., without having to alter your entire app. My advice is to do as little manipulation on the resultsets as possible and focus on object manipulation instead. Mind you, if you're writing some sort of generic recordset "explorer" then this doesn't apply. You can't possibly model objects from some random recordset that you have no previous knowledge of; in that case you'd have to examine the metadata of the recordset to get field names, data types, etc.
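    The separation described above can be sketched in a few lines (the Customer/CustomerStore names are hypothetical, invented for the example): the app works against a plain object that models the record plus a narrow store interface, so swapping JDBC for a file, socket, or web backend touches only the store implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Plain object modeling one record; the rest of the app sees only this.
class Customer {
    final int id;
    final String name;
    Customer(int id, String name) { this.id = id; this.name = name; }
}

// The single seam between the app and wherever the data actually lives.
interface CustomerStore {
    Optional<Customer> retrieve(int id);
    void store(Customer c);
}

// One possible backend; a JDBC-based store built on a ResultSet could
// replace this without changing any code that uses CustomerStore.
class InMemoryCustomerStore implements CustomerStore {
    private final Map<Integer, Customer> rows = new HashMap<>();
    public Optional<Customer> retrieve(int id) { return Optional.ofNullable(rows.get(id)); }
    public void store(Customer c) { rows.put(c.id, c); }
}

public class StoreDemo {
    public static void main(String[] args) {
        CustomerStore db = new InMemoryCustomerStore();
        db.store(new Customer(7, "Craig"));
        System.out.println(db.retrieve(7).get().name); // prints "Craig"
    }
}
```

    For the original question about surfacing which fields violated a constraint, the same seam is where a JDBC implementation could catch the DatabaseException, inspect the Oracle error, and rethrow something the rest of the app understands.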

  • "Simple" Project - Can't figure out Square Pixels, Codecs, etc.

    Just bought FCE thinking that making an iPhone demo video would be easier and have spent hours so far unable to get past the first (basic?) step.
    I would like to make a QuickTime video for the web at 600x400 pixels (aspect ratio of 1.5). I have recorded a .mov video at 320x460 that I want to display at full resolution in that 600x400 window when zooming/scaling, within a background that (I thought) would initially be 1500x1000 pixels (1.5) with a 320x460 inset. That's all, and all with square pixels.
    The trouble is, I cannot seem to find the right combination of how to size and design my background graphic (with Photoshop CS3, trying all combinations of pixel ratios) so it will fill up the canvas precisely, and I cannot find the right permutation that will let me fit my 320x460 video precisely. I have found combinations (sizing in Photoshop with pixel ratios, changing the size of the artwork, change pixel properties of the sequence in the browser, etc.) that work for the background and the inset video but nothing where they both work together.
    So, what's the secret to just making web videos using videos and background images together? This would seem to be a simple matter and I must be missing something. Thanks.

    My issues are not so much with functionality and learning such functionality (I think it's all there if you can find it) but with the User Interface.
    I expect when I select a clip in the Sequence timeline that the clip will appear in the Viewer since that is what I indicated I would be working on. The windows (viewer, timeline, canvas) are all literally disconnected. You think you're working on one thing, but you're not.
    I expected that when I added a motion keyframe in the Canvas, that the keyframes would be instantly marked in the timeline and in the viewer for all items (to coordinate concurrent scaling/effects).
    When you look at properties, the drop downs don't even show the currently selected property but just show the top one in the list (bad, bad programming and UI)
    The wireframe being called wireframe instead of 'handles' is bad.
    When scaling a clip or picture with a wireframe in Canvas, using the Shift key usually constrains the scaling to proportional sizes in EVERY graphics program I have ever used, except for FCE where using the shift key while dragging lets you distort the graphic/clip. No attention to common standards for object manipulation. Bad.
    The lack of the ability to specify precise adjustments (like to the pixel) in the Canvas (or anywhere) is bad. When I was scaling a clip in the canvas, I could not see behind the colored wireframe and had to rely on trial and error to see if my scaled clip fit perfectly. You could not even zoom to anything but the presets. Terrible.
    I'll stop.
    Video pros should be using FC Studio with huge learning curves, but at $199, FCE will of course be used by folks who want a step up from iMovie. The focus on "video" instead of web "movies" with pizazz is a huge oversight in the documentation.
