Nitty Gritty

Okay, after carefully inspecting my logic board, I found that the previous owner had blown a power management chip (it was literally burned). I have scoured the internet and found a replacement for it, but there is now another issue: at location C383, there appears to be a part missing. I believe it is probably a capacitor. With all this in mind, I was wondering if anyone would know where I can find a schematic for the iBook Dual USB logic board, or at least a parts list by board location. I know it's a pretty tricky thing to find, as I have been looking for days, but I was wondering if any of you would be able to help me. Oh, and I realize that my iBook is out of warranty, which is why I'm doing all this work myself.
Thanks for your help!!!
-Marty

If you look at the logic board (from the bottom of the laptop) you can see where the DC board connects to the logic board with the wire ribbon. C383 is about half an inch down and to the right (if the CD-ROM drive is up) from this connection. I appreciate the help!

Similar Messages

  • Need nitty-gritty help uninstalling

    My computer crashed during an iTunes upgrade... so now I've got one or more critical files missing. iTunes can't run without the files, can't uninstall (using the Windows uninstaller) without the files, and can't re-install because it detects a previous, undeletable installed version.
    I want to get all remaining traces of iTunes off of my computer so that I can, well, install iTunes.
    Help? Please? I don't want to have to completely wipe my drive and reinstall everything to expunge all traces of this thing.

    Hey Kimberly Stedman,
    Try going through the steps in this article to resolve:
    http://docs.info.apple.com/article.html?artnum=93976
    There is a step, "Clean up iTunes installer files on the computer," that is important to try.
    Hope this helps,
    Generik
    PowerMac G4/Dell Precision WS 370, XP Pro   Mac OS X (10.4.7)  

  • BGP decision algorithm nitty-gritty (relationship of locally originated routes to weight attribute)

    Hello everyone, I have a question on this algorithm. Specifically, the relationship between (Cisco-specific) WEIGHT, which is right at the top of the path selection algorithm, and routes that are ORIGINATED_LOCALLY (third one down, after WEIGHT and LOCAL_PREF). 
    Here are the relevant steps of the decision tree: 
       1/WEIGHT (highest wins)
       2/LOCAL_PREF (highest wins) 
       3/ORIGINATED LOCALLY (prefer locally originated over peer learnt) 
    What's confusing to me is that Jeff's book tells us that if a prefix is ORIGINATED_LOCALLY (i.e., entered into BGP on that same router, either by a network/aggregate-address statement or by redistribution), then its WEIGHT will also be set to 32768 (as opposed to a BGP peer-learnt route, whose WEIGHT is set to 0). I understand this. 
    My question is: why? It seems to me that if this is the case, there is little purpose in having ORIGINATED_LOCALLY in the decision tree at all, as the logic will never get there on account of the propagation of its value into the (higher up) WEIGHT decision. This in turn means that ORIGINATED_LOCALLY has the power to override the LOCAL_PREF attribute... so couldn't this whole logic be simplified to: 
       1/WEIGHT or ORIGINATED LOCALLY
       2/LOCAL_PREF (highest wins) 
    This very thing has confused another user on another post too; that user writes: "I tried thinking of an example where "ORIGINATED LOCALLY" works but weight doesn't, but couldn't think of any."
    Looking forward to the thoughts of this community.
    Thanks in advance, Keiran. 
    PS: perhaps the attached diagram will help visualise this. 

    Thanks for your reply shaikhkamran123, I hadn't considered the multivendor environment (where the Cisco-specific concept of 'weight' would be irrelevant to those routers), so yes, their decision would start with: 
    1) Local Preference
    2) Locally originated
    as opposed to Cisco's:
       1/WEIGHT (highest wins)
       2/LOCAL_PREF (highest wins) 
       3/ORIGINATED LOCALLY (prefer locally originated over peer learnt) 
    but it still doesn't really explain why Cisco chose to alter their built-in weight based on whether a route was locally originated. This alters the logic of the above decision algorithm: i.e., if it's locally originated, it will set a high weight (32768), which will be preferred... and here's the main thing: *BEFORE* LOCAL_PREF is even looked at. So in other words, decision criterion #3 gets merged into #1, skipping ahead of #2. Am I going crazy here? 
    Thanks in advance, all... 
    K. 
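    To make the point concrete, here is a quick sketch of the first three steps of the selection order as described in this thread. This is my own illustrative Java, not Cisco source or any real BGP implementation; the class and method names are made up for the example.

    ```java
    // Minimal model of the first three Cisco best-path steps discussed above.
    class BgpRoute {
        final String label;              // just for readable results
        final int weight;                // Cisco-proprietary, highest wins (step 1)
        final int localPref;             // highest wins (step 2)
        final boolean locallyOriginated; // preferred over peer-learnt (step 3)

        BgpRoute(String label, int weight, int localPref, boolean locallyOriginated) {
            this.label = label;
            this.weight = weight;
            this.localPref = localPref;
            this.locallyOriginated = locallyOriginated;
        }

        // Default weights as described in the thread:
        // 32768 if locally originated, 0 if learnt from a peer.
        static BgpRoute withDefaultWeight(String label, int localPref, boolean local) {
            return new BgpRoute(label, local ? 32768 : 0, localPref, local);
        }
    }

    class BestPathSelector {
        /** Returns the winner's label plus the step that decided it. */
        static String select(BgpRoute a, BgpRoute b) {
            if (a.weight != b.weight)                        // step 1: WEIGHT
                return (a.weight > b.weight ? a : b).label + " (step 1: weight)";
            if (a.localPref != b.localPref)                  // step 2: LOCAL_PREF
                return (a.localPref > b.localPref ? a : b).label + " (step 2: local_pref)";
            if (a.locallyOriginated != b.locallyOriginated)  // step 3: ORIGINATED_LOCALLY
                return (a.locallyOriginated ? a : b).label + " (step 3: local origin)";
            return a.label + " (tie)";
        }
    }
    ```

    With the default weights, a locally originated route beats a peer-learnt route at step 1 even when the peer route has a higher LOCAL_PREF, so step 3 is never reached. Step 3 only gets a chance to decide if an operator manually overrides the weights (e.g., zeroes them on both routes), which is presumably one reason it remains in the list as a separate criterion.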

  • What is it about this markup that breaks DW CS3 Design View Editing?

    Folks:
    I'm attaching very simple --and apparently completely valid-- markup that generates the text shown between the markers below:
    ------------start----------------
    yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda yadda badda.
    ------------end----------------
    As you can see by examining the file, it contains only a span enclosing text and spaces, and some following breaks.  This file passes DW validation: no errors or warnings.
    If I open this file in DW CS3 Design View, select and delete the last word, "badda", the entire text --every "yadda"--  is also deleted.  All that text is really gone -- if I immediately save the file and re-open it, there is no text content at all, just the span pair,  spaces, and the trailing breaks.  This is completely repeatable.
    When you edit this file, do you get the same result?
    The attached file is a cut-down, simplified version of a production file that exhibits this issue. (That file also passes validation.)   When I delete single  words,  entire content paragraphs disappear.   So this is not just a point of curiosity.  I need to figure out how to fix the production code and avoid this problem in the future.
    What is it about the attached markup that causes this issue?  I've fooled around with this code quite a bit, and the best I can figure out is that DW is confused when both the start and end span tags are followed immediately by a line break.  Inserting a space in either place makes the issue go away.   But the production code doesn't have any occurrences of this pattern. And, anyway, line breaks have no significance to HTML, just for source-code formatting, right?
    Am I missing something incredibly obvious?
    TIA
    hen3ry

    Nancy O:
    Thank you for your response, which gave me  valuable clues to understanding the issues.   (Specifically, it led me to re-read and understand better the nitty-gritty of the HTML4 specs,  section 7, "The global structure of an HTML Document", especially 7.5.1 The BODY element.)
    I believe you are saying --in sum--  that to be reliably edited in DW, each source file must comprise a syntactically correct and complete HTML Document. 
    I am attaching a second file, "badcode2.html", modified substantially as you suggest.  It passes the DW validator, the validator.w3.org test, and the www.totalvalidator.com test.    No errors or warnings. 
    I do this:   Launch DW, open this file, choose Design View if necessary, select the final word of text, "badda", and activate Edit-->Cut (or Delete).   As before, all the other text is deleted as well. This is repeatable. 
    It seems to me this sample code satisfies your general principle.  Could I be misapplying the three validation tests?  Or that passing these tests does not assure the document  is syntactically correct and complete?   Can you recommend other tests? 
    I'd like to add the following two points as a matter of background and a bit of niggling: 
    --I'm aware  that complete HTML pages must contain <html> , <head>,  and <body> tags, although the HTML4 specification seems to say <body> tags are optional.   All my production pages, as served, contain these tags.   My underlying design is a php template with individual content files incorporated by inclusion.   There may be only one set of <html> , <head>,  and <body> in  a  page. The  "outer" template code provides these tags, and individual content files cannot contain a nested set -- so they must be "bare" markup.   Is DW able to support this design, in which "bare markup" files are seemingly unavoidable?    Is there a mechanism analogous to DW Design-Time Style Sheets to provide virtual existence of these tags so "bare markup" files can be successfully edited?  I've looked for such a mechanism but not found one.    Or some completely different method?   
    --You specify the inclusion of <p> or <h1> tags within the  body.    Is there some special significance of <p> or <h1>  with respect to stable editing in DW?  My reading of the HTML4 spec indicates that a single block element is the minimal requirement for body content. Either of these tags qualify, right?   But so does <div>, and that's what I use in my current example.    Am I misunderstanding something? 
    Bottom line,  I'm looking for a pragmatic solution for my problem:   Once in a while, among hundreds of  structurally similar "bare markup" pages I edit in DW without any problem,  I lose data.   If inserting an additional tag into all my content include files eliminates these occasional problems, I'm  willing to do that, as long as:  (1) There is an arguable technical basis for the addition,  (2) the added tag does not  produce anything visible on the served page, and (3) the validators don't flag the extra tag.     
    Suggestions, please! 
    Thanks, 
    hen3ry

  • Batch won't run unless Illustrator CS3 is active window?

    I'm hoping someone can help me out with this as it's driving me nuts. Basically I'm trying to batch an action I made in CS3 to run on about 700 illustrator files.
    Everything runs just fine and dandy until I switch from Illustrator to another program (Firefox, Photoshop, Bridge, anything other than Illustrator); once Illustrator is no longer the active window, the batch process stops. When I make Illustrator the active window again, the batch starts again. This worked perfectly in CS2, and I never had this problem until CS3.
    This is a huge pain as I can't work on anything else while keeping the batch running in the background.
    Here are the nitty-gritty details:
    XP SP2, Illustrator CS3, smoking fast machine
    Action was created in actions palette to do the following:
    Unlock All > Select All > Create Outlines > Convert to Grayscale
    I ran the batch via the actions palette and chose "Save and Close" and "log errors to a file".
    I've restarted Illustrator, restarted the machine, recreated the action, and broken the folder into smaller folders with fewer files. It's making my Tuesday feel like a Monday.
    I've searched all over and found nothing. There was one post here on the Adobe forums, but it was from October and it never got answered. If you have a solution you will be my hero!
    Thanks!

    Yes, I have absolutely seen this! At our office we batch process Illustrator files to JPEG very frequently, and it's a huge nuisance to have Illustrator block the system during the batch processing. Everyone here is on XP SP2 as well, and we all have this issue. In fact, trying to switch in and out of Illustrator while it's batching can cause the application to crash fairly regularly for us.
    I can confirm that the Mac version of Illustrator CS3 does *not* have this problem, as I am batch-exporting right now with Illustrator in the background on my Mac. (Mac OS X 10.5.2, Illy 13.0.2)
    A fix for this would be most appreciated, Adobe!

  • Oracle database 11g release 2 installation problem on windows 7 (64-bit)

    First of all, my Windows is not genuine, but on my friend's desktop the Oracle download and installation worked fine; he chose the "Create and configure database" option, and it works very well on his desktop, though his Windows is also illegitimate. In my case, when I selected the "Create and configure database" option and pressed "next",
    (Go to my blog to see it with snapshots: Computer Science: Oracle database 11g release 2 installation problem on windows 7 (64-bit))
    it asks to select a class; I selected "Desktop class" and pressed "next". The moment I pressed "next", the whole setup thing disappeared as if it had never started. I searched for all possible reasons why it's not getting installed on my laptop: I used registry cleaner s/w, deleted 25 GB of data to create free space in case that was the problem, and increased the virtual memory to increase the space for RAM. I did almost everything to get this setup working, but I found no success with the "Create and configure database" option.
    And then I chose the "database software only" option and chose to store it in a folder w/o spaces. This way, I got the database s/w only, and later I found the "Database Configuration Assistant (DBCA)" from the Windows START button and clicked it to create and configure the database manually. The steps are pretty much interactive and don't involve much brainstorming.
    The values I filled for
    1) Global Database Name :  orcl
    2) System Identifier : orcl
    3) I chose common password for both SYS and SYSTEM
    4) While on the Enterprise Manager Configuration step, it asked me to create and configure a listener in the Oracle home, so for that too, I typed "netca" in the Windows START menu and clicked it. There I added a listener.
    5) I chose a storage area which was the Oracle home itself, i.e., where the installation files go; in my case it is: C:\oracle_base\product\11.2.0\dbhome_1\oradata
    6) Then, after a few more nitty-gritty clicks, we are set to go!
    Finally, to write SQL code and create your first TABLE, type "sqlplus" in the Windows "START" menu and click it when it appears. A command-prompt-like window appears, which will ask you for a username and password, so here they are:
    Username : sys/ as sysdba
    Password : (it's the one you created in step 3 above)
    After this you are ready to write your first SQL command.

    Is this your solution to your original post at Oracle database 11g release 2 installation on windows 7 (64-bit) ?
    Please be aware that you should not create any custom objects in the SYS or SYSTEM schema - you should create any such objects in a separate custom schema.
    About Database Administrator Security and Privileges

  • New to PrPro with specific workflow questions

    Hi, All!
    I'm a 20+ year video veteran wading into the Premiere Pro pool this year. I have an extensive Avid and FCP background. Please don't hold that against me! I also realize this is long, but I'm trying to cram everything in so you can see where I've been to possibly re-route me, make it understandable, and at least a wee bit interesting.
    I'm currently running PrPro 5.5.2 on a MBP 2.8 GHz Intel Core 2 Duo with 4GB RAM. My media storage is a 2TB Sonnet Tempo RAID partition over eSATA to an Express34 card on the MBP.
    I've successfully been doing AVC-Intra projects in PrPro for the last few months. This new project is different, and I want to see if there is any way to make my process more efficient. It's a very simple project, but it's very long. I'm finding as I do my workflow tests that the render, export, and conversion process I had been doing for the short AVC-Intra projects (which were in the 5-7 minute range) is going to be very time consuming on this project, which has a hefty 2 1/2 hour run time over 14 modules. The editing is simple, if not cookie-cutter. It's creating the deliverables that's going to be the real bear here. I'm hoping your collective experience can offer me a better workflow than what I've cobbled together from my other knowledge and previous forum searches.
    The nitty gritty:
    These videos are basically putting a narrator to a PowerPoint presentation, prettying it up with a music stinger at the open and close, and adding a nicer-than-PPT transition between the slides. I have an AIFF audio track for narration, an AIFF music file from SmartSound, PNG graphic files (a converted PowerPoint deck), and an animated Apple PNG transition (from the Digital Juice people; it's one of their "Swipes").
    I started with an AVC-Intra timeline because that seemed to me to be the highest quality. I understand this was probably my first error. The renders are taking about the run time of the module. With these elements and the following deliverables, what Sequence Settings do you recommend I should be using?
    My deliverables will be myriad, which is why I want to export a high-quality master file out of PrPro and then do the multiple conversions to other sizes & codecs in a compression app. I have Encore, Sorenson Squeeze, and Compressor at my disposal. I'm guessing Encore will be my friend here, but it's also the one I know the least.
    I tried exporting a ProRes file. That took over 40 minutes on a 14 minute module. Then I converted that to a 640 x 360 H.264 to upload as an approval file. That took about 35 minutes. (I used Compressor because I have a droplet created for that, and it's what I have successfully been using to make small, good-looking files that my clients can reasonably download or stream.)
    Next I tried exporting out the highest quality H.264 at 1920x1080 and tried converting that using my Compressor droplet. That took 50 minutes to export out of PrPro, but it saves me one step of converting to a full size H.264. What I'm not sold on is whether or not that file is the best file I should be using to make the other deliverables.
    What I will eventually need to deliver is a family of H.264 in various sizes from 1920x1080 to 320x180 & an authored DVD. So we're talking hours of processing for each one if I go with what I know.
    Since the only thing that moves in these videos is a handful of animated transitions (10-15 per module) and a fade in & fade out, my gut tells me I could make this happen faster if I only changed a few things in my workflow. I'm just at a loss as to what those things might be.
    Another little note: the client will want to customize these with different logos, so I will have to be able to get in, add or change a logo super, and re-export and re-convert all 2 1/2 hours multiple times. A more efficient way to do that would be very helpful, both for my sanity and my clients' budget.
    Any insight would be greatly appreciated! Thanks!!
    deb

    >avoid 24 hours of processing
    Adding to what was just posted
    The computer I replaced in 2010 was based on the Pentium 4 CPU (Windows XP with CS3), and when I was done editing my SD video and ready to create a DVD, I would start Encore before I went to bed, so it would have an ISO ready for disc writing in the morning.
    The computer I have now is described in http://forums.adobe.com/thread/652694 and my entire process of going from AVCHD to an SD AVI and then encoding to ISO in Encore is just a bit over real time (maybe twice real time, but that is subjective, not based on a stopwatch).
    CPU cores and speed, plus lots of RAM (12Gig for my motherboard... 16Gig for newer-technology motherboards) and multiple fast hard drives, ALL make a difference.
    Your computer is, very simply, just barely able to run PPro... I have NO idea what is available in the Mac world, but to have "acceptable" speed, you need a new computer.

  • Can I use Mac Mini as a "server" for a MacBook Pro and iMac?

    I want to develop a simple system for accessing my files from multiple devices.
    I heavily use iTunes (190 GB), iPhoto (120 GB) and iMovie (30 GB, but want to do more when I get time).  Additionally, I store lots of old videos on circa 10 portable hard drives (1 TB each), which I access occasionally. 
    The family use all this content and we currently have a MacBook Pro and are considering buying a new computer - either a new MacBook Pro or an iMac. We also have iPads, iPhones etc and a Time Capsule. We use wireless fibre optic broadband.
    I don't really want multiple iTunes and iPhoto libraries.  We want all our music and pictures in single master libraries that we can all access and update.
    As there is seemingly too much data to store on iCloud, we need to find an alternative solution. 
    My possible solution is:
    (1) Load my master iTunes, iPhoto and iMovie libraries ("iTPM libs") on a Mac mini
    (2) Buy a new MacBook Pro or iMac (probably doesn't matter which)
    (3) Link the MacBook Pro(s) and iMac to the Mac mini to access iTunes, iPhoto and iMovie and the master iTPM libs
    (4) Plug the portable hard drives into the Mac mini whenever I need to access the video content stored on the portable hard drives
    (5) Add "users" to the system (i.e. family members), in the same way you would access a shared server at work
    (6) Over time, buy more computers to access the system in the same way
    My questions are:
    (1) Is this system feasible?
    (2) If so, can I edit the iTPM libs from the MacBook Pro or iMac?
    (3) If so, what happens if both the MacBook Pro and iMac are accessing the libraries at the same time? Which takes precedent?
    (4) Would this solution be significantly slower than having separate libraries on the different computers?
    (5) Is there an alternative approach that might work better for my needs? (which are, after all, relatively straightforward).  When I went to the Apple store they were unsure!
    Many thanks.

    I heavily use iTunes (190 GB), iPhoto (120 GB) and iMovie (30 GB, but want to do more when I get time).  Additionally, I store lots of old videos on circa 10 portable hard drives (1 TB each), which I access occasionally.
    The family use all this content and we currently have a MacBook Pro and are considering buying a new computer - either a new MacBook Pro or an iMac. We also have iPads, iPhones etc and a Time Capsule.
    […] We want all our music and pictures in single master libraries that we can all access and update.
    The "update" portion of this is where your problems will arise.  You'd be better to consider your proposed system initially in terms of content "consumption", before getting into the nitty-gritty of editing shared content.
    iTunes has the Home Sharing feature built-in, which solves the content consumption issue.
    I don't know anything about sharing iPhoto libraries, but this KB article looks useful.
    As for iMovie, I think there are two things you want to do: share the raw data (sound effects, music, photos etc.) that are used to create iMovie content; and share the finished results.  Is that right?  I'm not sure how you'd go about doing that, but if you clarify your aims in this regard, it'll help the next person who comes along.
    When it comes to updating content, such as adding new music to iTunes or iPhoto, I'd suggest you do it through the Mac mini directly.  This seems the least problematic solution.  You wouldn't necessarily need a dedicated display for the Mac mini - you could use remote desktop or VNC or what-have-you.
    As for speed: doing anything across a network is always going to be slower than doing it locally.  But, it probably won't matter for content consumption (though if you're all watching different movies at the same time the drives on the server may struggle, and the wifi bandwidth might be tight).
    I hope this represents a good initial response to your query.

  • MSI P67A-C43 (B3) USB not working but theres a twist! Help!

    I have built a new rig with the MSI P67A-C43 (B3) motherboard, 16GB DDR3 PC3-10600 1333MHz memory (4GB x 4), an Intel Core i5-2500K 3.3GHz 6M LGA1155 processor, a 550 Watt 80 PLUS Bronze power supply, a 24x DVDRW, a 3.5" internal card reader with 2 USB ports, and a 500GB 7200RPM 3.5" SATA hard drive (which I plan on upgrading; I just didn't want to waste money on the shitty HD they had)...
    So, down to the nitty gritty. My computer is running Windows 7 64-bit; I have downloaded all of the updates and all of the chipset drivers, BIOS, etc. I ran a system registry tool to clean up any errors in my registry, and I have ALL of my drivers up to date.
    I have a USB mouse and keyboard that connect fine and are recognized by my computer. Sometimes they stop working and start back up within seconds, but Windows doesn't display any type of error or disconnection (this may have been fixed by the registry cleaner I used; I haven't had any hiccups yet). So I have my USB mouse and USB keyboard working fine and dandy now, I think. I got a new headset, and Windows finds the USB headset and it works perfectly fine. BUT, and this is where my whole frustration comes out: whenever I connect my iPod Touch, iPod Nano, my cell phone (to copy files to/from my SD card), or my external My Passport drive, Windows shows a "USB Device was not recognized" notification, and in Device Manager it shows the one device with a yellow exclamation point labeled "Unknown Device". I have tried uninstalling the USB hubs and reinstalling them, restarted Windows, and still none of the devices work. I have tried making sure that Windows does not turn off the device to save power; when I say I have tried almost everything, I mean everything related to "USB Device not Recognized". So I do not know what to do now. I have contacted the manufacturer, and they don't know squat about it or what could be causing it. The only thing I can think of is that some hardware is not compatible. I do hear some beeps when I start up (I will have to come back and update how many beeps); I saw another post saying that the motherboard is trying to tell me something, so maybe this could be the issue. But I have tried every single USB port and none of them recognize the device, yet when I plug my mouse and keyboard into different ports they still work, so I am a bit confused. My last step is to return the computer to the manufacturer for replacement or repair, whatever; I do not want to do this because this should be working.
    My friend has almost the exact same MB and equipment and his works fine, so I am getting a little frustrated with the issue. Someone please help me out. And yes, I have tried turning off the computer, unplugging it, pressing the power button, waiting like an hour, etc., and still nothing. Drivers are up to date, and the motherboard BIOS and chipset drivers are up to date as well. I hope someone knows or has an answer to help me out!

    Yes, the BIOS is up to date, and they are not plugged into the blue USB 3.0 ports (x2). Oh, and I forgot to mention: I already did a clean install of Windows 7 64-bit again, and still nothing. I'm also wondering if the power supply is enough? Maybe?

  • I have a Sony NEX-VG10, edit with Prem Pro and am having serious capturing issues.

    I recently purchased the Sony NEX-VG10 and am really excited about it.  I have run into some serious problems capturing the footage, though.  I have a 2010 iMac, am running OS X 10.8.2 Mountain Lion, and use Adobe Premiere Pro CS6 to edit.  I've discovered that the files on the camera are .MTS, a file format with which I am unfamiliar (not surprising; I'm not exactly a newb, but the past few months are the first time I've started editing seriously - I am more a writer/director/producer and could do basic stuff with FC7 and Prem Pro - now I own my own production company and am getting down to the nitty gritty). 
    But I have found that
    1. I can't capture using bridge, it doesn't recognize the file format.
    2. When I plug the camera into the computer and go straight to the files, I don't even get individual files.  All the information is contained in what looks like a Quicktime movie, but isn't recognized by Quicktime or any other program as anything.
    3.  When I go into the camera files using the import function in Premiere Pro, I find the individual .MTS files, which say they are Unix Executable files (no idea what that means).  I can import them into PP, but the quality is atrocious: grainy and pixely, even when viewing them in full-res playback mode.
    4. I bought Bigasoft Total Video Converter (wicked cool program).  I import the files using PP, then go find them in this program and convert them into .mov files - the quality is still atrocious (not surprised, as the .mts files look cruddy; what could I expect when I converted them - but it was worth a try).
    I need to capture these files in full quality (oh, the file information says they are 1920 x 1080 (1.0), so it's not like I filmed in low res). 
    The software that came with the camera is for PCs.  I've searched for answers to these problems, and although I've found similar ones, I can't find anything that actually helps me.  Can anyone out there tell me exactly what to do?!  I would be most appreciative.  My new company's success depends on it!

    If you'll type mts into the search field for these communities, you will see a plethora of threads on the subject - I was going to link to one, but there are so many, it'd be best for you to read through several. In any case, of note is the fact that you might be able to import into iMovie directly as long as you do not shoot in 50/60 fps - in order to be able to read it, it has to be either 30fps for NTSC or 24 for PAL.
    Not sure if the frames per second problem has anything to do with your lousy quality problem, but it might. The other concern is that Sony has not and does not really play nice with anything Apple - that was one of the reasons I chose a Canon.

  • Paradigm Shift: the WDP Model & the Power to Bind

    As a developer coming from an OO/java background, I recently started to study and use the Java Web Dynpro framework for creating enterprise portal applications.
    Up to this point, I've developed 2 or 3 WDP projects, and in so doing, I've tried to reconcile my Java-influenced development methods with the SAP way of doing things. I'd say for the most part it was rather painless. I did, however, find a serious problem, as far as I'm concerned, in the way SAP has promoted the use of the java bean model importer.
    <a href="https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/u/251697223">David Beisert</a> created this tool and presented it to the SDN community in 2004 in his <a href="/people/david.beisert/blog/2004/10/26/webdynpro-importing-java-classes-as-model">weblog</a>. The same year (I don't know if it was before or after), SAP published '<a href="https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1f5f3366-0401-0010-d6b0-e85a49e93a5c">Using EJBs in Web Dynpro Applications</a>'. Both of these works presented simplified examples of invoking remote functions on EJB backends (an add() function in the case of David Beisert's example, and a calculateBonus() function in the case of the SAP publication). Accordingly, they both recommended the use of the Command Bean pattern as an implementation strategy for their respective examples. Which I don't totally disagree with, in these particular circumstances: a simple execute() method is perfectly suitable if one needs to EXECUTE a remote function call, whether it be a calculate() method invoked on an EJB Session Bean or an RFC call made to some remote ABAP system.
    Problem is, not everything in life is a function call! To me, it makes very little sense to model everything as a command if it doesn't match your business model. The needs of your application should dictate the architecture of your model, and not the other way around.
    This unjustifiable fixation on the Command Bean pattern is probably to blame for the fact that very little seems to have been written up to this point on the subject of the binding mechanism as a most powerful tool in the arsenal of the Web Dynpro developer.
    What's this?
    Binding can make it possible to abstract away most of the nitty gritty context node navigation and manipulation logic and replace it with more intuitive and more developer-friendly model manipulation logic.
    There was a time when programs that needed persistence were peppered with database calls and result set manipulation logic. Hardly anyone codes like that anymore, and with good reason. The abstraction power of object-oriented technologies has made it possible to devise human-friendly models that let developers concentrate on business logic instead of wasting time on the low-level idiosyncrasies of database programming. Whether it be EJBs, JDO, Hibernate... whatever the flavour... most serious projects today use some sort of persistence framework and have little place for hand-coded database access logic.
    I feel that the WD javabean model offers the same kind of abstraction possibilities to the Web Dynpro developer. If you see to it that your WD context and javabean model(s) mirror each other adequately, the power of binding lets you implement most of your processing directly on the model, while behind the scenes your context and UI elements stay magically synchronized with your user's actions:
    +-------------+        +-------------------+         +--------------+        +------------+
    |    Model    |<-bound-| Component Context |<-mapped-| View Context |<-bound-| UI Element |
    +-------------+        +-------------------+         +--------------+        +------------+
                           o Context Root                o Context Root
                           |                             |
    ShoppingCartBean <---- +-o ShoppingCart Node <------ +-o ShoppingCart Node
    {                        |                             |
      Collection items <---- +-o CartItems Node <--------- +-o CartItems Node <-- ItemsTable
      {                        |                             |
        String code; <-------- +- Code <-------------------- +- Code <----------- CodeTextView
        String descrip; <----- +- Description <------------- +- Description <---- DescTextView
    Let's look at this concept in action. I propose a simple but illustrative example: a shopping cart application that presents the user with a collection of catalog items, and a shopping cart to which catalog items may arbitrarily be added and from which they may be removed.
    The Component and View contexts will be structured as follows:
       o Context Root
       |
       +--o ProductCatalog       (cardinality=1..1, singleton=true)
       |  |
       |  +--o CatalogItems      (cardinality=0..n, singleton=true)
       |     |
       |     +-- Code
       |     +-- Description
       |
       +--o ShoppingCart         (cardinality=1..1, singleton=true)
          |
          +--o ShoppingCartItems (cardinality=0..n, singleton=true)
             |
             +-- Code
             +-- Description
    Let's first examine how a conventional Command Bean implementation of this component could be coded. Later on, I'll present a more object-oriented, model-based approach, and we can then compare the two.
    public class ProductCatalogCommandBean
    {
       // collection of catalog items
       Collection items = new ArrayList();

       public void execute_getItems()
       {
          // initialize catalog items collection
          items = new ProductCatalogBusinessDelegate().getItems();
       }
    }
    This command bean will serve as a model to which the ProductCatalog node will be bound. This happens in the supply function for that node in the component controller:
    public void supplyProductCatalog(IProductCatalogNode node, ...)
    {
       // create model
       ProductCatalogCommandBean model = new ProductCatalogCommandBean();
       // load items collection
       model.execute_getItems();
       // bind node to model
       node.bind(model);
    }
    No supply function is needed for the ShoppingCart node, since it is empty in its initial state. Its contents will only change when the user adds items to or removes items from the cart. These operations are implemented by the following two event handlers in the view controller:
    public void onActionAddItemsToCart()
    {
       // loop through catalog items
       for (int i = 0; i < wdContext.nodeCatalogItems().size(); i++)
       {
          // current catalog item selected ?
          if (wdContext.nodeCatalogItems().isMultiSelected(i))
          {
             // get current selected catalog item
             ICatalogItemsElement catalogItem = wdContext.nodeCatalogItems().getElementAt(i);
             // create new element for ShoppingCartItems node
             IShoppingCartItemsElement cartItem = wdContext.createShoppingCartItemsElement();
             // initialize cart item with catalog item
             cartItem.setCode       (catalogItem.getCode());
             cartItem.setDescription(catalogItem.getDescription());
             // add item to shopping cart
             wdContext.nodeShoppingCartItems().addElement(cartItem);
          }
       }
    }
    public void onActionRemoveItemsFromCart()
    {
       // loop through cart items
       for (int i = 0; i < wdContext.nodeShoppingCartItems().size();)
       {
          // current shopping cart item selected ?
          if (wdContext.nodeShoppingCartItems().isMultiSelected(i))
          {
             // get current selected item
             IShoppingCartItemsElement item = wdContext.nodeShoppingCartItems().getElementAt(i);
             // remove item from collection
             wdContext.nodeShoppingCartItems().removeElement(item);
          }
          else
          {
             // process next element
             i++;
          }
       }
    }
    From what I understand, this is the typical way SAP recommends using command beans as a model to implement this kind of simple component.
    Let's see how those same two event handlers could be written with a more comprehensive object model at their disposal: one whose role is not limited to data access, but which can also adequately present and manipulate the data it encapsulates. (The actual code for these model beans follows.)
    // I like to declare shortcut aliases for convenience...
    private ProductCatalogBean catalog;
    private ShoppingCartBean   cart;

    // and initialize them in the wdDoInit() method...
    public void wdDoInit(...)
    {
       if (firstTime)
       {
          catalog = wdContext.currentNodeProductCatalog().modelObject();
          cart    = wdContext.currentNodeShoppingCart  ().modelObject();
       }
    }
    Now the code for the event handlers:
    public void onActionAddItemsToCart()
    {
       // add selected catalog items to shopping cart items collection
       cart.addItems(catalog.getSelectedItems());
    }

    public void onActionRemoveItemsFromCart()
    {
       // remove selected shopping cart items from their collection
       cart.removeItems(cart.getSelectedItems());
    }
    I feel these two lines of code are cleaner and easier to maintain than the context-manipulation-ridden handlers of the command bean approach.
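Stripped of the Web Dynpro APIs, the model operations behind those two handlers are ordinary collection manipulation. A minimal, runnable sketch (the class and item representation are simplified stand-ins, not WD-generated code):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Plain-Java stand-in for the cart model: the handlers above reduce to
// collection-level operations instead of per-element context loops.
public class CartModelSketch {
    private final Collection<String> items = new ArrayList<>();

    public void addItems(Collection<String> toAdd)       { items.addAll(toAdd); }
    public void removeItems(Collection<String> toRemove) { items.removeAll(toRemove); }
    public int size() { return items.size(); }

    public static void main(String[] args) {
        CartModelSketch cart = new CartModelSketch();
        cart.addItems(List.of("A001", "B002"));   // "add selected catalog items"
        cart.removeItems(List.of("A001"));        // "remove selected cart items"
        System.out.println(cart.size());          // prints 1
    }
}
```

The point is not the three trivial methods, but that business logic lives in testable plain Java rather than in view-controller loops.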
    Here's where the models are bound to their respective context nodes, in the component controller:
    public void supplyProductCatalogNode(IProductCatalogNode node, ...)
    {
       node.bind(new ProductCatalogBean(wdContext.getContext()));
    }
    public void supplyShoppingCartNode(IShoppingCartNode node, ...)
    {
       node.bind(new ShoppingCartBean(wdContext.getContext()));
    }
    Notice that a context is provided in the constructors of both models (a generic context of type IWDContext). We saw earlier that our model needs to be able to respond to requests such as catalog.getSelectedItems(). The user doesn't interact directly with the model, but with the Web Dynpro UI elements. These in turn update the context... which is where our model fetches the information it needs to do its job.
    Also note that a model is provided for the shopping cart here, even though it has no need to access or execute anything on the back end. Again, the model here is not being used as a command bean, but as a classic object model. We simply take advantage of the power of binding to give ourselves a clean and simple little helper that updates all the relevant context structures behind the scenes when we tell it to.
    Here are the ShoppingCartBean and ProductCatalogBean classes (I've omitted a few getter/setter methods in order to reduce unnecessary clutter):
    public class ShoppingCartBean
    {
       Collection items = new ArrayList();
       IWDNode    itemsNode;

       public ShoppingCartBean(IWDContext context)
       {
          // initialize shortcut alias for ShoppingCartItems node
          itemsNode = context.getRootNode()
                             .getChildNode("ShoppingCart", 0)
                             .getChildNode("ShoppingCartItems", 0);
       }

       public void addItems(Collection items)
       {
          this.items.addAll(items);
       }

       public void removeItems(Collection items)
       {
          this.items.removeAll(items);
       }

       public Collection getSelectedItems()
       {
          return ItemDTO.getSelectedItems(itemsNode);
       }
    }
    public class ProductCatalogBean
    {
       Collection items;
       IWDNode    itemsNode;

       public ProductCatalogBean(IWDContext context)
       {
          // fetch catalog content from back-end
          items = new ProductCatalogBusinessDelegate().getItems();
          // initialize shortcut alias for CatalogItems node
          itemsNode = context.getRootNode()
                             .getChildNode("ProductCatalog", 0)
                             .getChildNode("CatalogItems", 0);
       }

       public Collection getSelectedItems()
       {
          return ItemDTO.getSelectedItems(itemsNode);
       }
    }
    Notice that both classes delegate their getSelectedItems() implementation to a common version placed in the ItemDTO class, which seems like a good home for this type of generic ItemDTO-related utility.
    This DTO class could also have been used by the command bean version of the event handlers; it would somewhat reduce the number of loops. In any case, the ItemDTO class shouldn't be viewed as "overhead" of the model-based version, since it will usually have been created in the J2EE layer for the marshalling of EJB data (see the <a href="http://java.sun.com/blueprints/corej2eepatterns/Patterns/TransferObject.html">Data Transfer Object pattern</a>). We just take advantage of what's there and extend it to package some common ItemDTO-related code we require.
    // extends the DTO made available by the EJB layer
    public class ItemDTO extends com.mycompany.shoppingcart.dto.ItemDTO
    {
       String code;
       String description;

       public ItemDTO()
       {
       }

       public ItemDTO(String code, String description)
       {
          this.code = code;
          this.description = description;
       }

       // returns a collection of ItemDTOs for the currently selected node elements
       public static Collection getSelectedItems(IWDNode node)
       {
          // create collection to be returned
          Collection selectedItems = new ArrayList();
          // loop through item node elements
          for (int i = 0; i < node.size(); i++)
          {
             // current item element selected ?
             if (node.isMultiSelected(i))
             {
                // fetch selected item
                IWDNodeElement item = node.getElementAt(i);
                // transform item node element into ItemDTO
                ItemDTO itemDTO = new ItemDTO(item.getAttributeAsText("Code"),
                                              item.getAttributeAsText("Description"));
                // add selected item to the selectedItems collection
                selectedItems.add(itemDTO);
             }
          }
          return selectedItems;
       }
    }
    Notice that the getSelectedItems() method is the only place in our model where context node navigation and manipulation actually take place. It's unavoidable here, given that we need to query these structures to react correctly to user actions. But wherever possible, the business logic (like adding items to and removing items from the cart) has been implemented with standard Java constructs instead of by manipulating context nodes and attributes.
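The shape of that one context-dependent method can be shown in plain Java by replacing the WD node with a hypothetical list of rows carrying a selection flag (Row and selectedCodes are illustrative names, not WD APIs):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for ItemDTO.getSelectedItems(): filter the
// "node" (here a plain list) down to the rows the user selected.
public class SelectionSketch {
    record Row(boolean selected, String code, String description) {}

    static List<String> selectedCodes(List<Row> node) {
        List<String> out = new ArrayList<>();
        for (Row row : node)
            if (row.selected())        // mirrors node.isMultiSelected(i)
                out.add(row.code());   // mirrors item.getAttributeAsText("Code")
        return out;
    }

    public static void main(String[] args) {
        List<Row> node = List.of(
            new Row(true,  "A001", "Apple"),
            new Row(false, "B002", "Banana"));
        System.out.println(selectedCodes(node));   // prints [A001]
    }
}
```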
    To me, using a java bean model as an abstraction for the Context is much like using EJBs as abstractions of database tables and columns:
                         abstracts away
               EJB model --------------> database tables & columns
                         abstracts away
      WDP javabean model --------------> context  nodes  & attributes
    Except that a javabean model (residing in the same JVM) is much more lightweight and easier to code and maintain than an EJB...
    Before concluding, it's worth pointing out that this alternative vision of the Web Dynpro model in no way precludes implementing a command bean, if that happens to suit your business needs. You will always be able to implement an execute() method in your WDP model if and when you feel the need. But now, freed of the mandatory Command Bean directive, you can ditch the execute() method when you don't need such a thing, and replace it with a few well-chosen operations like getItems(), addItems(), removeItems(), getSelectedItems()... which, as we've just seen, can add significant value to the javabean model made available to your WDP component.
    Comments on this issue would be appreciated (if anyone has had the time/courage/patience to read this far... ;) ). Am I alone in being intrigued by the potential of this, so far scarcely mentioned, design strategy?
    Romeo Guastaferri

    Hi Romeo,
    thanks for sharing this with the community. I am a little bit surprised that the command pattern was understood as the only way to use the Javabean model in conjunction with EJBs. My command pattern blog was just a very simplified example of how a function call can be translated to a Java bean model; really, it was meant to show how the paradigm of a model works. I personally use a similar approach to yours. It seldom makes sense to map an EJB method one-to-one to a model; the Javabean model must be driven by the user interface and represents a bridge between the business service layer and the UI. I personally even think that it often does not make sense to map RFC functions as they are to the Web Dynpro context. Most often you end up writing ZBAPIs that return structures shaped the way they are used in the UI. But if you use a Java bean model as a layer in between your service layer, you are more flexible in evolving the application. Anyway, design patterns for the Java bean model need to be discussed more on SDN, as they add very valuable possibilities you would never have when working with value nodes alone. With the Javabean model we are back in the real OO world, where things like inheritance work; things that are really not too well supported by the native WD features. I encapsulate every context of mine as Javabeans. This has nothing to do with EJBs (of which I am personally not a fan) but only with the fact that I want to work with the power of the OO world.
    rgds
    David

  • How can I call a java class from within my program?

    I was wondering if there's a platform independent way to call a java class from my program.

    Here's my scenario. I'm working on a platform-independent, feature-rich, object-oriented command prompt program. The way I'm designing it, users can drop classes they write into my bin directory and gain access to them through my program. For example, they drop a class named Network.class into the bin directory. They would type "Network network" at my command prompt and gain access to all the methods available in that class. They can then type "system.echo network.ipaddress()" at my prompt and get the system's IP address. I have it designed so that a server runs in the background and clients connect to my port. Once connected, the end user can enter their user name and password and gain access to the system. When they type a command, they actually call another Java program, which connects to my server using a separate thread, and the two can then communicate back and forth. Everything has a process ID, which is used to keep track of who called what program. Once the program is done, it disconnects and closes. Rather than getting into the nitty gritty (I didn't want to go into heavy detail; I know how everything will work), I'm really interested in finding out how I can call a Java program from my program. I don't want it to be part of the app in any way.
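Since you want the called program to stay entirely outside your app, one platform-independent approach is to spawn a second JVM with ProcessBuilder, resolving the java executable from the running JVM's java.home instead of hard-coding a path. A rough sketch; "bin" and "Network" are the hypothetical directory and class from the scenario above:

```java
import java.io.File;
import java.io.IOException;
import java.util.List;

public class ExternalJavaLauncher {
    // Build a platform-independent command line to run a user-dropped class
    // in its own JVM: <java.home>/bin/java -cp <binDir> <className>
    public static List<String> command(String binDir, String className) {
        String java = new File(new File(System.getProperty("java.home"), "bin"), "java")
                          .getPath();
        return List.of(java, "-cp", binDir, className);
    }

    // Launch the class as a separate process, sharing the parent's console.
    public static Process launch(String binDir, String className) throws IOException {
        ProcessBuilder pb = new ProcessBuilder(command(binDir, className));
        pb.inheritIO();
        return pb.start();
    }
}
```

If instead you wanted the plugin classes loaded into your own JVM (so the prompt can hold live object references), java.net.URLClassLoader plus reflection would be the usual route, but that does make them part of your app's process.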

  • ? IDEAS FOR A CONTOLLER CARD FOR ATA 400 GB HD??

    Hi Chaps,
    The Nitty Gritty:
    Seeking an inexpensive PCI controller card to unlock the full size & speed of my newly installed 400 GB Seagate Ultra ATA HD, which is currently limited to the 137 GB cap of my oldschool Quicksilver G4.
    Looking for widely available card that I could possibly find in-store. I have perused the discussion boards at length but feeling more confused.
    Budget is key, I have heard they are available as little as $40. I do not need any RAID capability, just fast and reliable inexpensive product.
    System:
    G4 Quicksilver AGP
    450 Mhz
    768 MB SDRAM
    CD ROM DRIVE (soon upgrading to Pioneer DVR-111D)
    OS 10.4
    Thanking you all in advance for any suggestions!!
    Best,
    -- Jesse
    G4 450 processor   Mac OS X (10.4)   768 MB SDRAM
    G4 dbl processor    

    Jesse, welcome, here is a link to the Acard PCI card, you might check your local supplier for any additional deals or rebates.
    Joe
    Power Mac G4 Gigabit Ethernet   Mac OS X (10.3.9)   1.5 GB Ram, ENCORE/ST G4, Tempo SATA, ATI Radeon 9000, Adaptec 4000

  • Out of memory error while deploying Oracle Service Bus Ressources

    Hi,
    When i am trying to deploy resources on sbconsole,it is showing below error on activation.
    &lt;Oct 15, 2011 3:11:38 PM CDT&gt; &lt;Error&gt; &lt;Deployer&gt; &lt;BEA-149265&gt; &lt;Failure occurred
    in the execution of deployment request with ID '1318709496539' for task '14'. Error is:
    'java.lang.OutOfMemoryError'
    java.lang.OutOfMemoryError
    at java.util.zip.Inflater.init(Native Method)
    at java.util.zip.Inflater.&lt;init&gt;(Inflater.java:83)
    at java.util.zip.ZipFile.getInflater(ZipFile.java:278)
    at java.util.zip.ZipFile.getInputStream(ZipFile.java:224)
    at java.util.zip.ZipFile.getInputStream(ZipFile.java:192)
    # There is insufficient memory for the Java Runtime Environment to continue.
    # Native memory allocation (malloc) failed to allocate 124968 bytes for Chunk::new
    # An error report file with more information is saved as:
    I have min and max heap size set to 3072M and min and max perm size set to 512M. I have tried a number of ways to solve this issue:
    1. I restarted WebLogic and tried to deploy again; every time I see the same error.
    2. This step turned into a whole mess: I deleted the plan files from $domain_dir/osb/config/plan and restarted WebLogic again. Even after deleting the plan files, I can still see all my proxies in sbconsole. Can somebody explain what the plan files in the plan directory and the ear files in sbgen are, and how OSB uses them?
    Thanks for your help
    -B

    plan.xml and the ears are used to deploy MDBs and customize them. When you deploy a proxy service that consumes JMS messages, OSB deploys a generic ejb.jar module and customizes it through a plan.xml (honestly, I don't know all the nitty gritty).
    In your case, for the OOM problems, I use the JVM option HeapDumpOnOutOfMemoryError; it's priceless. It produces a heap dump on which you can run a complete analysis with yourkit.com or Eclipse MAT. PRICELESS! If I were you, I would run such a test and send Oracle Support the heap dump and your findings.
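For reference, the full spelling of that option is `-XX:+HeapDumpOnOutOfMemoryError`. In a WebLogic/OSB domain it would typically be appended to the server's JVM arguments, e.g. in setDomainEnv.sh (the dump path below is just an example):

```shell
# Append to JAVA_OPTIONS in $DOMAIN_HOME/bin/setDomainEnv.sh (example path)
JAVA_OPTIONS="${JAVA_OPTIONS} -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdumps"
export JAVA_OPTIONS
```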

  • HP LaserJet 1320 requires user intervention to print after Leopard

    Ever since upgrading to Leopard, almost every single time I send a print job to my HP LaserJet 1320 (over the network, it actually connected to a PowerMac G5 in my office), it triggers an error warning (orange flashing light) on the printer. Pressing the main green print button lets the job print, but this is really annoying.
    I checked and Apple recently posted a note about the HP LaserJet 1200 having a very similar problem except that an error page was printed after each job. The workaround they suggest for this doesn't work with the LaserJet 1320 problem, however.
    A quick Google reveals at least one other Leopard using HP LaserJet 1320 owner is seeing a similar issue. And actually, as I composed this post, I found some 1320 owners in another Apple Discussion post about HP printer issues with Leopard. They suggested using the generic PostScript driver, but then I really like to have the ability to duplex.
    Nothing in the /private/var/log/cups/error_log file of note as far as I can tell.
    Any ideas? Any other people out there experiencing this problem who might have a solution (other than switching to a generic postscript driver)?

    Ok, I think I've figured out a way to make the LaserJet 1320 stop requiring "user intervention" (i.e. pressing the green button for every print job).
    Try the following:
    Open "System Preferences" -> choose "Print & Fax" -> choose the LaserJet 1320 printer -> press the "Options and Supplies..." button -> select the "Driver" tab -> in the "Fit to page" drop-down menu, choose "Nearest Size and Scale" instead of "Prompt User".
    The chosen driver should be "HP LaserJet 1320 series", because the generic PostScript printer driver doesn't have the duplex option, among other shortcomings.
    I find it odd that the lowest memory option is 16-31 MB of RAM. The default configuration of 16 MB is not IN that range, but maybe that is a little too nitty-gritty.
    ~ Jørgen
