Set Up/Conversion Opinions on G5 to 8-Core

I'll receive my 8-core in a few hours and am looking for advice on how to convert my G5 computer/data.
It's tempting to use the FireWire/Apple migration tools during setup - but...
1. Might this overwrite the native (Intel) apps with the G5 apps?
2. Will this convert my iTunes data properly?
Has anyone mapped out a 'best strategy'???
Also - what format should I use for extra drives in the new Intel box?
If I do the conversion by installing my G5 boot drive as a 2nd drive and dragging the files over manually, can I assume I get all the app prefs just by copying over the user Library files?
Any thoughts are very much appreciated.
Fritz

I have to say, Phil's process is nearest to my own. Generally my process is…
0) Clone previous system to external drive (so I can be as destructive as I like with it without fear of repercussions - I often use my backup clone for this)
1) Boot off the Installation CD/DVD and zero the drive (I never use a drive until it's been zeroed at least once)
2) Custom install Mac OS X and bundled software
3) Repair disk permissions
4) Run Software Update and update everything except Mac OS X (restart and repair permissions as required)
5) Manually download the combo version Mac OS X updater and install it, restart and repair permissions
6) Install software (repairing permissions as my fancy desires - normally after anything large)
Note: By this stage, and probably before, I'll know if there's anything wrong with the system, so I can act accordingly before my previous system is even touched and before anything destructive happens. Additionally, by doing everything manually through this process I get a better feel for the performance of the system.
7) Customise Mac OS X etc. as required, such as hosts file mods and web server configs
8) Manually transition my data from my previous system, starting with mission-critical items like mail, address books etc. and then working from top to bottom, leaving the Library folder mostly till last
Ultimately, if I don't have the time to do 0 through 8, I don't start the process. Sure it takes some time, generally around 12-18 hours depending on the drive zero time, but there's a lot to be said for it… I've never had to mess around, debug, delete, start over or even wonder if my system was going as fast as it should be. Which, in my book at least, is worth every second, and given that the Migration Assistant has been known to take in the region of hours, it's not much of a premium to pay for what you get from it.

Similar Messages

  • How to set the conversation id programmatically in a BPM process

    Hi all,
    I am using BPM/SOA 11g PS3.
    Is it possible to set the conversation id programmatically in a BPM process starting with a none start event?
    I know I can set it easily if I use a BPM process starting with a message start event.
    All I have to do is set it in the "wsa:MessageID" node in the SOAP header when I initiate the process instance.
    However, I have no idea how to set the conversation id programmatically in a BPM process starting with a none start event.
    I looked to see whether there is an appropriate method in the Java API for process instance management, such as IInstanceManagementService and CompositeInstance, but no method seems to be appropriate.
    Does anyone know how to do this?
    Regards,
    Kenji
    Edited by: Kenji Imamura on 2011/04/20 0:10
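    As an illustration of the working case described above (a message start event, with the conversation id carried in wsa:MessageID), a minimal client-side JAX-WS SOAPHandler that stamps the header on the outbound request might look like the sketch below. The handler class name and its wiring are illustrative only - they are not part of the BPM API - and this does not answer the none start event case.
    import java.util.Collections;
    import java.util.Set;
    import javax.xml.namespace.QName;
    import javax.xml.soap.SOAPEnvelope;
    import javax.xml.soap.SOAPHeader;
    import javax.xml.soap.SOAPHeaderElement;
    import javax.xml.ws.handler.MessageContext;
    import javax.xml.ws.handler.soap.SOAPHandler;
    import javax.xml.ws.handler.soap.SOAPMessageContext;
    // Illustrative handler: stamps wsa:MessageID on the outbound request that
    // initiates the process, which is where the conversation id is picked up
    // in the message start event case.
    public class ConversationIdHandler implements SOAPHandler<SOAPMessageContext> {
        private final String conversationId;
        public ConversationIdHandler(String conversationId) {
            this.conversationId = conversationId;
        }
        public boolean handleMessage(SOAPMessageContext ctx) {
            boolean outbound = (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
            if (!outbound) {
                return true; // only touch the request we send out
            }
            try {
                SOAPEnvelope env = ctx.getMessage().getSOAPPart().getEnvelope();
                SOAPHeader header = env.getHeader();
                if (header == null) {
                    header = env.addHeader();
                }
                SOAPHeaderElement messageId = header.addHeaderElement(
                    new QName("http://www.w3.org/2005/08/addressing", "MessageID", "wsa"));
                messageId.addTextNode(conversationId);
            } catch (Exception e) {
                throw new RuntimeException("Could not set wsa:MessageID", e);
            }
            return true;
        }
        public boolean handleFault(SOAPMessageContext ctx) { return true; }
        public void close(MessageContext ctx) { }
        public Set<QName> getHeaders() { return Collections.emptySet(); }
    }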

    Hi fifty,
    Did you get a solution to the problem you mentioned above? I have a similar issue I am trying to fix.
    I have a web service call in a process activity, and if the call does not work I get a SOAP fault and the fuego.lang.ComponentExecutionException. My process requires that I catch the exception - in fact any kind of exception that occurs on that call - and perform another activity in the process.
    I have defined an exception handler at the activity level for java.lang.Exception and java.lang.RuntimeException.
    I don't see anything in the catalog which would handle the SOAP fault or the ComponentExecutionException.

  • [AS][INDCC] How to set Color Conversion field to No Color Conversion when creating PDF Export preset

    How can I set the Color Conversion field in the Export to PDF dialog to No Color Conversion when creating a PDF export preset? I have done a bit of searching and found recommendations to set the effective PDF destination profile to use no profile, but that doesn't seem to be producing the expected results.

    Yes, it seems that I had to make the change after creation, not while creating the preset. Thank you.
    tell application "Adobe InDesign CC"
         -- the preset has to be created first...
         set newPreset to make new PDF export preset with properties ¬
              {name:"preset name", standards compliance:none, acrobat compatibility:acrobat 7}
         -- ...and the color space set afterwards, since it couldn't be set at creation time
         tell newPreset to set PDF color space to unchanged color space
    end tell

  • How to set expanded conversations in Mail?

    Hi there,
    I need to know how to keep conversations expanded in the Mail inbox.
    Every time I turn the iMac off/on and open Mail again, the mail conversations are collapsed again; I mean, I have to manually expand all the conversations again and again.
    How do I fix that?
    Mail > sort by conversation > expand all conversations
    Mail > Preferences > Viewing > ... include related conversations
    I did it all, but the conversations stay expanded only while the Mail app is open. If I close it, I have to do it all over again.
    Hope you can help.
    EBD

    Googled it! Which I should have done prior.

  • How do I set the conversion settings for Word to ignore outline autonumbering for headings?

    I have a user manual in Word that uses outline autonumbering for Heading 1, Heading 2, etc. (for example, "1.0 Introduction," "1.1 System Settings," and so on).
    I know when setting the conversion settings for FrameMaker, you can select the Ignore Autonumber option in the Conversion Settings dialog to keep only the paragraph content. I don't see anything like this for the Word Conversion Settings. Does this capability exist?

    To clarify:
    I have my paragraphs in Word set up with the required numbering properties using Word's Style settings. In Word, I want the headings to be numbered 1.0, 1.1, 1.1.1, etc. because the intent is for the reader to read through the manual like a book - from start to finish.
    I'm single-sourcing my manual from Word into RoboHelp to generate an online help package. However, I do not want my RoboHelp headings to be numbered because the user may or may not read each topic in order (in other words, they may only need to read what's in section 3.2, but won't get there by reading sections 1.0 through 3.1 first). I might also clarify that each heading is its own topic - so Section 3.2 would be a separate topic from 3.0 and 3.1.
    So for example, in Word, it reads like this:
         "1.0 Introduction
         [content goes here]
         1.1 System Requirements
         [content goes here]
         1.2 References
         [content goes here]"
    And in RoboHelp, I want it like this:
         "Introduction
         [content goes here]"
         ---[end of topic]---
         "System Requirements
         [content goes here]"
         ---[end of topic]---
         "References
         [content goes here]"
         ---[end of topic]---
    Does that make sense?
    So how do I keep the numbering in my Word document, but set up the Conversion Settings in RoboHelp to ignore those numbers and only include the text following the autonumber? I have been able to do this when single-sourcing a FrameMaker book into RoboHelp.

  • Character sets and conversions

    Hi all,
    We're facing quite a complex problem, and I'm not even able to specify where it is going wrong or what needs configuring, partly for lack of experience and partly because it combines different technical areas, only some of which I'm responsible for.
    So I'll briefly sketch the situation, and hopefully you can give me some guidelines or hints as to where to look.
    The setup: a web application (so clients access it with a browser) on a WebLogic/Linux platform, Tuxedo on iSeries, and as far as I understand a DB internal to the iSeries where the data is stored.
    Data is entered into the DB with a data-entry application that comes with the iSeries.
    The problem: when consulting data through the web application, some characters don't show up correctly, e.g. @ in email addresses, e's with accents, ...
    The chain being "browser <-> WL <-> Tuxedo <-> DB", the problem might be at different points. But with tracing activated, we could see that the response going out of Tuxedo to WL is not correct...
    Any hint as to what to look for, or which configuration is important, would be welcome...
    Some sub-questions:
    - I understand Tuxedo is always "installed" in English, with no other option. This means that, for example, logs are in English.
    But can/do we need to define some character set?
    - Between Tuxedo <-> DB, can you use some conversion tables?
    Any help would be appreciated, we're quite lost...

    Hi,
    Given that you are running Tuxedo on iSeries, I'm guessing you are running Tuxedo 6.5 as the port for the current Tuxedo release on iSeries hasn't been released yet. Tuxedo 6.5 does not directly support multi-byte character strings. The two common buffer formats for string data in Tuxedo are STRING which doesn't support multi-byte characters, or CARRAY which does support multi-byte characters as a CARRAY is essentially a blob. Do you know what buffer type the Tuxedo application is using to send data to WebLogic Server?
    In Tuxedo 9.0 and later, direct support for multi-byte strings was added in the form of the MBSTRING buffer type. This buffer type supports multi-byte strings with a variety of character sets and encodings.
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect

  • Set up conversion tracking with iAd Workbench

    Hello,
    I am starting to set up iAd Workbench and trying to sort out the conversion tracking.
    I would like to integrate the tracking with Mixpanel, or eventually with MAT (Mobile App Tracking), Adjust or Google Analytics if it's not possible with Mixpanel.
    From other posts, it seems that only MAT is enabled for the tracking. Is this correct?
    Also, from a technical perspective, what is needed to set the tracking up? Any type of code I should pass to my tech team?
    Best,
    Carola

    No, not without help from the thief.

  • Indicator "delete planned order" is set during conversion

    Hello Guys,
    During partial conversion of a planned order to a production order through MD04, the system shows the error message "indicator 'delete planned order' is set". What is the cause of this error, and how can we solve the issue? It's very urgent - PP gurus, please let me know the solution. Thanks in advance.
    Thanks...

    Dear Raja,
    If I have understood your problem correctly, then I guess that in the partial conversion screen you are able to set the delete indicator directly and save the data. But remember that for the same planned order quantity the system generates a production order, linked to the planned order number, when you do this deletion activity and save.
    You can check in MD04 against the available production orders, and in the assignment tab page you can check the planned order number as well.
    Check and revert back.
    Regards
    Mangalraj.S

  • Result Set - XML conversion - Problem in FormattedDataSet

    Hi,
    I'm trying to convert a JDBC ResultSet into an XML file.
    I read somewhere that the FormattedDataSet interface has many methods that are very useful for converting a ResultSet to XML.
    I downloaded the following files as mentioned in the website:
    1. fdsapi.jar
    2. jakarta-oro-2.0.8.jar
    3. JAMon.jar
    Also, as mentioned on the website (http://www.fdsapi.com/), I placed these jar files in my classpath. My classpath looks like:
    D:\Programs>set classpath
    CLASSPATH=D:\XMLConv\Installations\FormattedDataSet\fdsapi.jar;
    D:\XMLConv\Installations\FormattedDataSet\jakarta-oro-2.0.8.jar;
    D:\XMLConv\Installations\FormattedDataSet\JAMon.jar
    But when I executed my program from the command line, I got this error:
    a1.java:6: cannot find symbol
    symbol : class FormattedDataSet
    location: class a1
    FormattedDataSet rs = FormattedDataSet.createInstance();
    ^
    a1.java:6: cannot find symbol
    symbol : variable FormattedDataSet
    location: class a1
    FormattedDataSet rs = FormattedDataSet.createInstance();
    ^
    2 errors
    I tried running it in the Eclipse IDE, where I included the above jar files as external JAR files. But even there, I'm getting the same error.
    Could somebody please let me know how to solve this error?
    Thanks

    Checking the fdsapi.jar file reveals that the FormattedDataSet class is part of the JAR. Have you included the import statement for the package in your source? For example:
    import com.fdsapi.*;
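    For completeness, a minimal sketch of how this resolves once the import is in place. Only the import and createInstance() come from this thread; the JDBC connection details are placeholders, and the exact fdsapi method used to render the ResultSet as XML should be taken from the examples on http://www.fdsapi.com/.
    import com.fdsapi.*;          // makes FormattedDataSet visible to the compiler
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    public class a1 {
        public static void main(String[] args) throws Exception {
            // From the thread: obtaining the FormattedDataSet instance.
            FormattedDataSet fds = FormattedDataSet.createInstance();
            // Placeholder JDBC plumbing - replace URL, credentials and SQL with your own.
            Connection con = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pass");
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("select * from some_table");
            // Render 'rs' as XML here using the appropriate FormattedDataSet method
            // (not named in this thread - see the fdsapi examples).
            rs.close();
            stmt.close();
            con.close();
        }
    }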

  • Set me straight on RAM requirements for 8 core

    I have been hearing you need 1 GB of RAM for each core. Or at least that's what people are recommending. Isn't 8 GB a little overkill?
    I'm going to do HD editing. I would think 4 GB of RAM would be pretty good. Of course, I realize more is always better, but is 8 GB really necessary?
    Feel free to chime in, but I suppose we will find out when the 8-cores are shipping and what Leopard/FCP will bring us in the next few months.

    "Phil, do me a favor... PLEASE... check your page ins/outs and try running your machine a little harder and you tell me if you see a difference when you do get your RAM... "
    Of course I'll see a difference. I EXPECT to see a difference, that's the point. But where you and I differ is that you're expecting the upgrades to make your machine more stable. Mine is already stable - the only thing I'm expecting is increased performance. I'm sure page outs won't be completely eliminated, but they'll be much, much more reasonable than what they are now. I also use a G5 Quad with 8 gigs of RAM and rarely are the page outs "zero". However, even with them at about 5-20%, the Quad doesn't skip a beat.
    "Mr. Shockley only has 2 GB of memory, but I bet he can go to dinner and come back by the time his workflow has completed."
    Well, not quite that long, but sure - the limited RAM I have now is causing tasks to take longer than needed. (I've developed a serious hatred for the sound of disk access!) But while they take longer, the tasks DO get completed. Keep in mind I use several apps in Rosetta, including Photoshop CS3.
    "Uh oh, I ordered my Ram before I knew it used different chips."
    It wouldn't hurt to call, but I don't think I'd worry about it. The 8-core blurb probably just means the new part numbers reflect what Apple has tested. None of the third-party RAM vendors are showing a "specific" module for the 8-core. (Even before the 8-core machines, Apple would never publicly say third-party RAM was "compatible".)
    Apple also says the same thing in the blurb about hard drives. I can't imagine one SATA hard drive is compatible while another is not.
    Their term, "qualified" means they've tested the part.
    -phil

  • Conversions between character sets when using exp and imp utilities

    I use the EE8ISO8859P2 character set on my server. When exporting the database with NLS_LANG not set, conversion should be done between the EE8ISO8859P2 and US7ASCII charsets, so characters not present in US7ASCII should not be converted successfully.
    But when I import such a dump, all the characters not present in the US7ASCII charset are imported into the database.
    I thought that some characters should be lost when doing such conversions - can someone tell me why that is not so?

    Not exactly. If the import is done into a database with the same character set, then it doesn't matter how it was exported. Conversion (corruption) may happen if the destination DB has a different character set. See this example:
    [ora102 work db102]$ echo $NLS_LANG
    AMERICAN_AMERICA.WE8ISO8859P15
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:47:01 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> create table test(col1 varchar2(1));
    Table created.
    TEST@db102 SQL> insert into test values(chr(166));
    1 row created.
    TEST@db102 SQL> select * from test;
    C
    ¦
    TEST@db102 SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ export NLS_LANG=AMERICAN_AMERICA.EE8ISO8859P2
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:47:55 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> select col1, dump(col1) from test;
    C
    DUMP(COL1)
    ©
    Typ=1 Len=1: 166
    TEST@db102 SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ echo $NLS_LANG
    AMERICAN_AMERICA.EE8ISO8859P2
    [ora102 work db102]$ exp test/test file=test.dmp tables=test
    Export: Release 10.2.0.1.0 - Production on Tue Jul 25 14:48:47 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export done in EE8ISO8859P2 character set and AL16UTF16 NCHAR character set
    server uses WE8ISO8859P15 character set (possible charset conversion)
    About to export specified tables via Conventional Path ...
    . . exporting table                           TEST          1 rows exported
    Export terminated successfully without warnings.
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:48:56 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> drop table test purge;
    Table dropped.
    TEST@db102 SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ imp test/test file=test.dmp
    Import: Release 10.2.0.1.0 - Production on Tue Jul 25 14:49:15 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in EE8ISO8859P2 character set and AL16UTF16 NCHAR character set
    import server uses WE8ISO8859P15 character set (possible charset conversion)
    . importing TEST's objects into TEST
    . importing TEST's objects into TEST
    . . importing table                         "TEST"          1 rows imported
    Import terminated successfully without warnings.
    [ora102 work db102]$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:49:34 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> select col1, dump(col1) from test;
    C
    DUMP(COL1)
    ¦
    Typ=1 Len=1: 166
    TEST@db102 SQL>
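    The same effect can be reproduced outside the database: byte value 166 is a legal code point in both ISO-8859-15 (WE8ISO8859P15) and ISO-8859-2 (EE8ISO8859P2), it simply names a different character in each. A small, purely illustrative standalone Java sketch:
    import java.nio.charset.Charset;
    // Decodes the single byte 166 (the chr(166) from the example above) under the
    // two character sets involved and prints the resulting Unicode characters.
    public class ByteUnderTwoCharsets {
        public static void main(String[] args) {
            byte[] raw = { (byte) 166 };
            String asP15 = new String(raw, Charset.forName("ISO-8859-15")); // WE8ISO8859P15 view
            String asP2  = new String(raw, Charset.forName("ISO-8859-2"));  // EE8ISO8859P2 view
            System.out.printf("byte 166 as ISO-8859-15: %s (U+%04X)%n", asP15, (int) asP15.charAt(0));
            System.out.printf("byte 166 as ISO-8859-2 : %s (U+%04X)%n", asP2, (int) asP2.charAt(0));
            // Same stored byte, two different characters - which is why data can appear
            // to "change" between databases even though the byte values never did.
        }
    }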

  • Conversion error, from character set 4102 to character set 4103

    Hi,
    We've developed a JCO server (in Java), with an ABAP report calling the function provided by the JCO server.
    MetaData:
         static {
              // register the ZSMSSEND function interface in the local repository
              repository = new Repository("SMSRepository");
              fmeta = new JCO.MetaData("ZSMSSEND");
              fmeta.addInfo("TO", JCO.TYPE_CHAR, 255, 0, 0, JCO.IMPORT_PARAMETER, null);
              fmeta.addInfo("CONTENT", JCO.TYPE_CHAR, 255, 0, 0, JCO.IMPORT_PARAMETER, null);
              fmeta.addInfo("RETN", JCO.TYPE_CHAR, 255, 0, 0, JCO.EXPORT_PARAMETER, null);
              repository.addFunctionInterfaceToCache(fmeta);
         }
    Server parameters:
           Properties prop = new Properties();
           prop.put("jco.server.gwhost","shaw2k07");
           prop.put("jco.server.gwserv","sapgw01");
           prop.put("jco.server.progid","JCOSERVER01");
           prop.put("jco.server.unicode","1");
           srv = new SMSServer(prop,repository);
    If we run the JCO server both on my client machine (from Developer Studio) and on the WAS machine (as a stand-alone Java program), everything is OK. On the ABAP side, the SM59 Unicode test returns that the destination is a Unicode system, and the ABAP report calling the function runs smoothly.
    But when we package this JCO server into a web application and deploy it to WAS, the problem occurs. The SM59 Unicode test still says the destination is a Unicode system, but the ABAP report ends with an ABAP dump:
    Conversion error between two character set
    RFC_CONVERSION_FIELD
    Conversion error "RETN" from character set 4102 to character set 4103
    A conversion error occurred during the execution of a Remote Function
    Call. This happened either when the data was received or when it was
    sent. The latter case can only occur if the data is sent from a Unicode
    system to a non-Unicode system.
    I read the jrfc.trc log; it shows that the server receives data in code page 4103 (that's OK) but sends data in code page 4102 (that's the problem). 4102 is UTF-16 Big Endian and 4103 is UTF-16 Little Endian. Our system is Windows on Intel 32-bit architecture, so based on Note 552464 it should be 4103.
    Why does it send data (the Java JCO server sending the output parameter back to ABAP) in 4102?????
    What's the problem??? Thank you very much!!
    Best Regards,
    Xiaoming Yang
    Message was edited by:
            Xiaoming Yang

    Hello Experts,
    Any replies on this?
    I am also getting a similar kind of error.
    Do you have any idea on this?
    Thanks and Best Regards,
    Suresh

  • Lync SDK- I want to set conversation in full screen mode when someone shares their camera

    I'm able to get the created conversation and the AVModality to share my camera, but I don't know what kind of notification I'm supposed to get, or which handler I should subscribe to, in order to set the conversation to full screen when someone shares their camera.
    Right now I'm subscribed to these event handlers:
    AVModality.StreamStateChanged += AV_StreamStateChanged;
    AVModality.ModalityStateChanged += AVModality_ModalityStateChanged;
    VideoChannel.StateChanged += VideoChannel_StateChanged;

    "Codecs For Dummies" sounds like a new project for my Rapidweaver folder...
    Nope, sorry, I collected my "knowledge" over the years… I'm no technician, just a user, so my know-how is 35% max… a good place to start is the wiki, for sure… use the keywords mentioned above for a start - video, container, mov, avi, mpeg, divx…
    I'm pretty sure you'll get lost...
    Just to frighten you: have a look here, a small overview of some video codecs…
    Or, to frighten you more: honorable forum member F Shippey published this website…
    Another problem:
    You remember Betamax vs VHS? That fight wasn't "best vs. better", but sheer marketing power… H.264 is an excellent codec/compressor, which we Europeans have chosen for HiDef TV (in the future), but it's little supported in the PC world... because we "Maccies" invented it...
    Or, mentioning PCs:
    .wmv/Windows Media Video (besides... isn't video a medium??) comes in ... how many? ... 5? 9? 11? flavors... and for sure: they're not compatible with each other!
    The good point: the Mac supports the most-used codecs, and you can encode in the most-used codecs…
    Last word: quality - never forget: video compression isn't technique, it's an art!
    ..... woosh, and gone......
    keep posting!

  • Is there a way to shorten iMovie video conversion?

    I am starting with an "m2ts" video file.
    I purchased a video conversion program from the Mac App Store. It is called "AnyVideo converter HD". It is well reviewed.
    So I dropped in my "m2ts" file and set the conversion to "Computer MOV 720p HD". "AnyVideo converter HD" converted a 1-minute video to a "mov" in about 15 seconds. The converted file had excellent picture quality and sound, and it played perfectly in QuickTime.
    When I dropped this 60-second clip into iMovie 06 (set for 720p), its conversion process took 35 minutes?!?! The result was flawless! However, a two-hour video conversion would not be complete until about the time Capt. Kirk joins Starfleet.
    How can I speed this up ?

    baltwo - thank you for the expert advice.
    I did as you instructed and the situation is better, but I don't believe it's solved.
    I still get 3 versions of Photoshop when I right-click on a PSD. I also get Dashcode and TextWrangler - which are the oddest associations. All together I get 25 apps responding to "Open With" on a PSD. And this is an improvement.
    Do you think I would get even better results if I ran the same command again?
    Thanks again.
    JL

  • Query Designer : Quantity Conversion as per UOM

    Hi gurus,
    I am facing a tricky situation here. I have product quantity data that I fetch from R/3; this has different UOMs depending on the product. I know the conversion factor, but I have not included it in the transformation while loading the data. Hence, in the cube the data is loaded without any conversion.
    Now, in my query, I want the quantity data to always be displayed in, say, BOX (whether at an aggregate level, e.g. sales office level, or at a more granular level, e.g. product brand level) - it should always be in BOX. Can I make such a modification in my query so that the end result for quantity is always in BOX?
    thanks n regards,
    Sree

    Hi Sree,
    You can set unit conversion for the relevant key figures in the Query Designer.
    For that, you need to create a quantity conversion type first.
    For a BW 7.x system, use transaction RSUOM to maintain the quantity conversion type.
    It could be defined like this:
    Source unit: from data record
    Target unit: fixed: BOX
    Hope this helps.
    Regards,
    Patricia
