Batch Loader and Folders_g Question

I would like to use Batch Loader to load a large number of images into WCC. I am also using Folders_g in my implementation, and would like to tell Batch Loader, at load time, which folder under /Contribution Folders/ each asset should be placed in. Does anyone know the batch load file key/value pair that makes this happen?
Would I use something like this (using dCollectionPath)?
Action = insert
dDocTitle=Test Image
dDocAuthor=sysadmin
dSecurityGroup=Public
dDocType=Document
dInDate=10/25/10 9:58 AM
primaryFile=C:/Files/Test Image.doc
dCollectionPath=/Contribution Folders/images
<<EOD>>
Thank you,
Randy

Good Morning,
Thank you, Anand and Srinath, for the helpful answers. Yes, xCollectionID does work, and I have assets loading into the correct folder using that approach. However, I have several folders under the root /Contribution Folders hierarchy, and determining the xCollectionID of each of these subfolders before Batch Loader runtime, then mapping the IDs to each node in the batch load file, will be tedious. Furthermore, I will be building the batch load file on at least four different environments (DEV, TEST, STAGING, and PRODUCTION), and determining xCollectionIDs for several folders across these environments adds even more complexity. I can do it, but was hoping for an easier path.
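For anyone following along, here is roughly what the working entry looks like. The ID value below is made up; as far as I know, Folders_g stores its folders in the Collections table, so the actual dCollectionID for a given folder has to be looked up separately in each environment:
Action=insert
dDocTitle=Test Image
dDocAuthor=sysadmin
dSecurityGroup=Public
dDocType=Document
dInDate=10/25/10 9:58 AM
primaryFile=C:/Files/Test Image.doc
xCollectionID=932
<<EOD>>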
I am open to any more suggestions or ideas now that you have digested the information above.
Let me throw one other question out there. Let's say I am able to load all my assets into the several folders in my Folders_g hierarchy in my DEV environment using the xCollectionID mapping. Am I able to migrate that folder structure, with the loaded assets, downstream into TEST, STAGING, and PRODUCTION using some out-of-the-box UCM utility? Does something like that exist?
Thank you,
Randy

Similar Messages

  • About Batch Loader and Multiload Journal files

    Hi, can someone help me with two questions?
    Can I use Batch Loader to automate processing FDM journals and loading them to HFM?
    And second, is it possible to use multiload Excel files for these journals?
    Thx

    You can use the trial balance file for loading journals. Just select the location and set the data value to entity currency adjustments. The data will go in as a journal entry even if you use a trial balance file, but you won't have two columns for debit and credit, only a single column for the amount.
    Hope this is helpful.
    If you find any post helpful or correct please mark it.
    Nick

  • Batch loader and EBS Managed attachments

    Hi All,
    I have a customer with Managed Attachments on EBS. I'd like to run Batch Loader to import historical documents into the environment as private documents in the AFDocuments security group.
    My question is: how do I make these documents visible through the EBS business entity?
    Thanks,
    Nir

    Hi,
    I saw that there is a table named AFObjects (in the documentation).
    But I need to know whether there is any way to check in a document via WCC into the AFDocuments security group, supplying the EBS business object entity in one of the document's metadata fields, so that the EBS connection behaves as if I had added the new document through EBS Managed Attachments (or attached a new document to the EBS entity), without involving any DB procedure.
    Nir
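    For context, AFObjects is the table that ties a content item to an EBS business entity. A row looks roughly like the sketch below; the column names and values here are assumptions based on typical Managed Attachments setups (not confirmed in this thread), so verify them against the actual AFObjects definition in your schema first:
    -- Hypothetical sketch only; there may also be a generated key column.
    INSERT INTO AFObjects (dAFApplication, dAFBusinessObjectType, dAFBusinessObject, dDocName)
    VALUES ('EBS_instanceA', 'PO_HEADERS', '12345', 'UCM000123');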

  • Problem with Batch Loader and FDM user regional settings

    Hi
    I am using FDM 11.1.2.1.
    When I use the English regional settings for my FDM user account, I can export to Excel from FDM, but Batch Loader doesn't work (error: "one or more parallel processes failed to start").
    When I use the French regional settings for my FDM user account, Export to Excel doesn't work anymore (error: Conversion from string "12.0" to type 'Long' is not valid. Detail: InnerException1: Input string was not in a correct format.), but Batch Loader is OK.
    Any idea?
    Thanks in advance for your help
    Fanny

    Hi Mittal,
    By default, date formats are determined by the report server language at run time, which is the language of the operating system on which the report server is installed.
    When you deploy the report to a SharePoint site whose regional setting is English (Australia), please also set the report language to 'en-AU'; the date format will then change to English (Australia) as well.
    If the issue still exists, could you please tell us the date format of the date parameters and of the field? If possible, please post a screenshot so we can analyze further.
    Reference:
    Solution Design Considerations for Multi-Lingual or Global Deployments (Reporting Services)
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Loading file in Merge Mode through the batch loader in FDM

    Hi,
    I have multiple locations loading in replace mode and one location loading in merge mode. We are using the batch loader and following the naming convention specified in the admin guide. Is there a way to specify in the location name that the file should load to HFM in merge mode, or does that location need a separate adapter with the load rule set to merge?
    Any assistance would be greatly appreciated

    You don't need a separate adapter. You can achieve this by changing the naming of the files. I believe you currently use the RR option for replacing the data (seq_<Location>_<Category>_<Period>_RR.txt). You can use A or R as appropriate in your load method.
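    To illustrate the convention being described, something like the following (these file names are invented, and the exact meaning of each suffix letter should be confirmed against the FDM admin guide):
    1_NEWYORK_ACTUAL_JAN-2012_RR.txt
    2_TEXAS_ACTUAL_JAN-2012_RA.txt
    The two trailing letters encode the load methods, so varying the suffix per file lets different locations load with different rules without a separate adapter.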

  • The wonderful world of Batch Loading - what's going on?

    Hi,
    I'm having an issue with the batch loader. When I load a file using the batch loader, and then load the same file manually by executing the workflow, I get two different datasets at the import stage. I have a single import script, and it seems that it doesn't execute properly when the file is loaded via the batch loader, but does when I perform a manual import. I can't think of any logical reason for this. The script is executed, but only partially, when the batch load is run. I make use of an FDM temporary variable in the script, RES.PvarTemp1, and this is the only thing I can think of that may be failing. Do these work when the batch load is run? Has anyone else experienced similar problems with the batch loader and import script execution?
    Any ideas are very welcome :-)
    FDM version 11.1.1.3 by the way

    Is the iPhoto icon showing crossed out?  Like this?
    When you shut down your Mac, it may have installed a system update that had been downloaded automatically.
    Which version of MacOS X are you currently running?  The newest MacOS X version 10.10.2 requires iPhoto 9.6.
    Try to download and install the newest iPhoto version from the App Store "Featured" page.

  • ALBPM - Batch events and Load Balancing

    Hi,
    We are planning to design a BPM solution for one of our current applications. The need we have is that the BPM solution should be able to start process instances in batch mode. We receive about 25,000 to 30,000 events in a batch file, and we need to start one BPM process instance for each record. Currently we are evaluating ALBPM and trying to figure out the best approach to do this. I would also like to know the better options for configuring load-balanced instances of BPM for this scenario. I could not find any documents from BEA addressing these. Has anybody tried this or come across this situation? Any documents/examples that can answer or suggest options for the batch modes? Thanks in advance!

    First of all, load balancing applies at various levels.
    1) Load balancing at the horizontal level is only achieved by creating a WLS cluster and deploying the BPM engine in the cluster.
    2) Load balancing at the vertical level can be achieved by creating many BPM engines and deploying processes on each engine; this will distribute instances between the engines.
    If you need to create 25k instances of the same process in a batch, there is no problem; it will queue the executions and dispatch them to the available threads.
    A good approach would be to create instances in smaller chunks. To do this you can create a "batch process" that gets the file, splits it into smaller chunks, and processes one chunk at a time (see the sketch below). An important thing to consider when working with so many instances is that the transactions tend to get bigger and longer, so if you have a huge chunk, you can get a transaction timeout or a DB exception because of redo log sizes.
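    The "batch process" idea is generic; here is a rough sketch of just the splitting step in plain Java (nothing ALBPM-specific — the chunk size and line-per-event file layout are assumptions):
    import java.io.*;
    import java.nio.file.*;
    import java.util.*;
    public class BatchSplitter {
        // Split a large event file into chunks of at most chunkSize lines, so each
        // chunk can be handed to the engine as a separate, smaller unit of work.
        public static List<Path> split(Path input, int chunkSize) throws IOException {
            List<Path> chunks = new ArrayList<>();
            BufferedWriter writer = null;
            try (BufferedReader reader = Files.newBufferedReader(input)) {
                String line;
                int count = 0, chunkNo = 0;
                while ((line = reader.readLine()) != null) {
                    if (count % chunkSize == 0) {
                        // Start a new chunk file alongside the input file.
                        if (writer != null) writer.close();
                        Path chunk = input.resolveSibling(input.getFileName() + ".chunk" + chunkNo++);
                        writer = Files.newBufferedWriter(chunk);
                        chunks.add(chunk);
                    }
                    writer.write(line);
                    writer.newLine();
                    count++;
                }
            } finally {
                if (writer != null) writer.close();
            }
            return chunks;
        }
    }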
    Hope this helps!
    Mariano Benitez
    Join us at BEAParticipate, May 6-9 2007 | Atlanta, Georgia

  • Sql Loader and a batch id

    Hello
    I have a loading table which has a primary key. We insert into a load_ctl table, which has a load_ctl_id, and then SQL*Load a CSV file into a table called load_table.
    Now, when we have to process load_table, we work with the primary key from the control table, load_ctl_id, to know which load to process. How can I get the load_ctl_id into my load_table? The CSV file does not contain it.

    What full version of Oracle?
    How do you currently generate the control table load_ctl_id?
    Do you have to be concerned with concurrent load jobs?
    What tool are you using to perform the load (sqlldr)?
    If you already have a way of generating the load_ctl_id and placing it into the control table, and you do not need to worry about concurrent jobs, you could use a before-insert trigger to insert the current maximum load_ctl_id (assuming a sequence or date stamp) into the load table with each row insert. Or leave the column null during the load and then, immediately after the load, update each row.
    If you have to worry about concurrent load processes, where each would be a different batch number, then how you currently create the load_ctl_id value matters more to the solution, since you have to make sure two concurrently running sessions would in fact grab two different batch ids.
    HTH -- Mark D Powell --
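    A minimal sketch of the before-insert trigger described above (the table and column names follow the thread; the trigger name is invented, and it assumes a single, non-concurrent load job):
    CREATE OR REPLACE TRIGGER load_table_bir
      BEFORE INSERT ON load_table
      FOR EACH ROW
    BEGIN
      -- Stamp each incoming row with the latest batch id from the control table.
      SELECT MAX(load_ctl_id) INTO :NEW.load_ctl_id FROM load_ctl;
    END;
    /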

  • 10gR3 -- Validate Standard and batch loader

    Team,
    We have created a component that uses the validateStandard filter to create the Content ID according to our business logic. When we import content using Batch Loader, it looks like validateStandard is not firing. Is it not possible to use validateStandard while using Batch Loader? Let me know your views.
    regards,
    deepak

    You could always use IdcCommand to load batches; then you can choose your own service, such as a normal check-in service that doesn't skip any steps, so it picks up your filter.
    It is described pretty well in this blog: http://blogs.oracle.com/kyle/2010/10/ucm_batch_load_content_workflow.html
    Good luck
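    For reference, a minimal sketch of what such an IdcCommand run might look like. The service choice (CHECKIN_UNIVERSAL), file names, and field values below are assumptions rather than anything confirmed in this thread, so check them against your own installation:
    @Properties LocalData
    IdcService=CHECKIN_UNIVERSAL
    dDocTitle=Test Image
    dDocAuthor=sysadmin
    dSecurityGroup=Public
    dDocType=Document
    primaryFile=C:/Files/Test Image.doc
    @end
    Invoked along the lines of: IdcCommand -f checkin.hda -u sysadmin -l idccommand.log (flags vary by version, so verify against the IdcCommand documentation). Because this runs a full check-in service, filters such as validateStandard should fire normally.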

  • Increase No. of BGP while data load and how to bypass the DTP in Process Chain

    Hello  All,
    We want to improve the performance of the loads. Currently we are loading the data from an external database through a DB link; just to mention, we are on a BI 7 system. We are bypassing the PSA to load the data as quickly as possible. Unfortunately we cannot use the PSA, because load times are longer when we use it. So we access views on the external database directly. The external database is also indexed as per our requirements.
    Currently our DTP is set to run on 10 parallel processes (in the DTP settings for batch, Batch Manager with job class A). Even though we set it to 10, we can see loads running on only 3 or 4 parallel background processes. Not sure why. Does anyone know why it behaves like that and how to increase them?
    I want to split the load into three (different DTPs with different selections), and all three will load the data into the same InfoProvider in parallel. We have a routine in the selection that looks at a table to get the respective selection conditions, and all three DTPs kick off in parallel as part of the process chain.
    But in some cases we only get data for two DTPs, or one (depending on the selection conditions). In this case, is there any way in the routine or the process chain to say that if there is no selection for a DTP, then that DTP is ignored or set to success, and the process chain continues?
    Really appreciate your help.

    Hi
    Sounds like a nice problem…
    Here is a response to your questions:
    Before I start, I just want to mention that I do not understand how you are bypassing the PSA if you are using a DTP. Be that as it may, I will respond regardless.
    When looking at performance, you need to identify where your problem is.
    First, execute your view directly on the database. Ask the DBA if you do not have access. If possible, perform a database explain on the view (this can also be done from within SAP… I think). This step is required to ensure that the view is not the cause of your performance problem. If it is, we need to implement steps to resolve that.
    If the view performs well, consider the following SAP BI ETL design changes:
    1. Are you loading deltas or full loads? When you have performance problems, the first thing to consider is making use of the delta queue (or changing the extraction to send only deltas to BI).
    2. Drop indexes before load and re-create them after the load 
    3. Make use of the BI 7.0 write optimized DSO. This allows for much faster loads.
    4. Check if you do ABAP lookups during the load. If you do, consider loading the DSO that you are selecting on in memory and change the lookup to refer to the table in memory rather. This will save tremendous time in terms of DB I/O
    5. This will have cost implications but the BI Accelerator will allow for much faster loads
    Good luck!

  • OPA 10.4 - batch processor and Entity

    Hello all.
    I have a problem using entities with the batch processor.
    The fact is we want to use OPA with a CSV input named like "TARIF_ONE.csv": a different CSV for each type of tariff, and each time a different project that will be called by a batch process (the client's requirement)... without the use of a configuration file!
    So, if I have understood the OPA batch process correctly, for a "TARIF_ONE" project to be fed with a "TARIF_ONE.csv" file, I need a "TARIF_ONE" entity that will receive all the data from the CSV input file.
    I've done so.
    But my problem is that when I call the OPA zip in batch, the TARIF_ONE entities are correctly fed with the data from the CSV file... but the calculations from the Excel file are not made at all... and in the exporttsc file, the entities (TARIF_ONE) are set as "unknown" for all the cases input...
    I've set up my project like this:
    My properties file is set up like this:
    Global
    - TARIF_ONE (entity)
    --- INPUT_DATA_ONE (attribute) : Text = "Input data one"
    --- INPUT_DATA_TWO (attribute) : Text = "Input data two"
    --- OUTPUT_DATA_ONE (attribute) : Text = "Output data one"
    --- OUTPUT_DATA_TWO (attribute) : Text = "Output data two"
    And my Excel rules file is set up as below:
    - Declaration Sheet
    -- INPUT_DATA_ONE : Text = "Input data one from TARIF_ONE"
    -- INPUT_DATA_TWO : Text = "Input data two from TARIF_ONE"
    -- OUTPUT_DATA_ONE : Text = "Output data one from TARIF_ONE"
    -- OUTPUT_DATA_TWO : Text = "Output data two from TARIF_ONE"
    - Calculation Sheet
    -- OUTPUT_DATA_ONE = INPUT_DATA_ONE * INPUT_DATA_TWO
    As I said, when using the batch process, the input attributes for the TARIF_ONE entity are correctly fed, but the OUTPUT attributes are not (still unknown)... and the entity itself is reported as "unknown"... any ideas on how to solve this?
    Thank you,
    Philippe,

    Philippe Carpentier wrote:
    Thank you Franck for your answer.
    I would have just one more question: is it possible to use different entities in zero-configuration mode?
    I tried with one data file per entity, and the entities are not created.
    I tried with all the data for the entities in the global.csv file, and the same.
    It seems to me that when using different entities in a project for batch, we have to use a configuration file... can you confirm my impression?
    Philippe,
    You can definitely load entities other than global using zero configuration. To do this, create a CSV file that has the same name as the public name of your entity. For example, if you have an entity with the name "person", create a person.csv file containing a row for each entity instance.
    You will need a reference indicating which global instance each entity instance relates to. The way to do this is to have a column named after the entity's containment relationship, holding the foreign key to the entity instance's parent.
    Example
    We have a global.csv with 3 globals (3 cases):
    #,(number_of_persons)
    1,
    2,
    3,
    And then a person.csv with 7 people, all of which must be linked to their owning (parent) global. This is done through the containment relationship with the public name "all_persons":
    #,person_name,person_age,all_persons
    1,Fred,27,1
    2,Barney,28,1
    3,Homer,40,2
    4,Marge,38,2
    5,Bart,10,3
    6,Maggie,1,3
    7,Lisa,8,3
    Each value in the column "all_persons" is a foreign key to the global value which contains that person.
    For more information on this, see "CSV input for the Batch Processor", and "Zero-configuration conventions for CSV input" in the "Oracle Policy Automation Developer's Guide". http://docs.oracle.com/html/E29403_01/toc.htm

  • Load and Unload Alias Table - Aggregate Storage

    Hi everyone,
    Here I am again with another question about aggregate storage...
    There is no "load" or "unload" alias table parameter listed for "alter database" in the syntax guidelines for aggregate storage (see http://dev.hyperion.com/techdocs/eas/eas_712/easdocs/techref/maxl/ddl/aso/altdb_as.htm).
    Is this not a valid parameter for aggregate storage? If not, how do you load and unload alias tables if you're running a batch script in MaxL and you need the alias table update to be automated?
    Thanks in advance for your help.

    Hi anaguiu2,
    I have the same problem now. Did you find a solution for loading and unloading an alias table in aggregate storage? Could you share the solution you used, if you have one?
    Thanks, Manon

  • Account with an icon of a face and a question mark

    Same issue as another user in the Yosemite section of Apple Support Communities.
    Following the advice in that thread, I also ran the EtreCheck software tool; the report is as follows:
    Problem description:
    At the login screen I find an icon with a face and a question mark in it - with a message it needs an update.
    EtreCheck version: 2.1.5 (108)
    Report generated 02 gennaio 2015 12:37:26 CET
    Click the [Support] links for help with non-Apple products.
    Click the [Details] links for more information about that line.
    Click the [Adware] links for help removing adware.
    Hardware Information: ℹ️
      MacBook Pro (13-inch, Mid 2012) (Verified)
      MacBook Pro - model: MacBookPro9,2
      1 2.5 GHz Intel Core i5 CPU: 2-core
      16 GB RAM Upgradeable
      BANK 0/DIMM0
      8 GB DDR3 1600 MHz ok
      BANK 1/DIMM0
      8 GB DDR3 1600 MHz ok
      Bluetooth: Good - Handoff/Airdrop2 supported
      Wireless:  en1: 802.11 a/b/g/n
    Video Information: ℹ️
      Intel HD Graphics 4000
      Color LCD 1280 x 800
    System Software: ℹ️
      OS X 10.10.1 (14B25) - Uptime: 1:22:54
    Disk Information: ℹ️
      APPLE HDD HTS545050A7E362 disk0 : (500,11 GB)
      EFI (disk0s1) <not mounted> : 210 MB
      Recovery HD (disk0s3) <not mounted>  [Recovery]: 650 MB
      Macintosh HD (disk1) / : 498.89 GB (467.03 GB free)
      Encrypted AES-XTS Unlocked
      Core Storage: disk0s2 499.25 GB Online
      MATSHITADVD-R   UJ-8A8 
    USB Information: ℹ️
      Apple Inc. FaceTime HD Camera (Built-in)
      Apple Computer, Inc. IR Receiver
      Apple Inc. BRCM20702 Hub
      Apple Inc. Bluetooth USB Host Controller
      Apple Inc. Apple Internal Keyboard / Trackpad
    Thunderbolt Information: ℹ️
      Apple Inc. thunderbolt_bus
    Gatekeeper: ℹ️
      Mac App Store and identified developers
    Launch Daemons: ℹ️
      [loaded] com.adobe.fpsaud.plist [Support]
    User Login Items: ℹ️
      iTunesHelper Applicazione (/Applications/iTunes.app/Contents/MacOS/iTunesHelper.app)
      Dropbox ApplicazioneHidden (/Applications/Dropbox.app)
    Internet Plug-ins: ℹ️
      FlashPlayer-10.6: Version: 16.0.0.235 - SDK 10.6 [Support]
      Flash Player: Version: 16.0.0.235 - SDK 10.6 [Support]
      QuickTime Plugin: Version: 7.7.3
      Default Browser: Version: 600 - SDK 10.10
    Safari Extensions: ℹ️
      Pin It Button [Installed]
      Save to Pocket [Installed]
      Add To Amazon Wish List [Installed]
    3rd Party Preference Panes: ℹ️
      Flash Player  [Support]
    Time Machine: ℹ️
      Time Machine not configured!
    Top Processes by CPU: ℹ️
          14% WindowServer
          3% hidd
          2% Safari
          1% Dock
          0% fontd
    Top Processes by Memory: ℹ️
      333 MB com.apple.WebKit.WebContent
      155 MB mds_stores
      137 MB Safari
      137 MB Finder
      86 MB Dropbox
    Virtual Memory Information: ℹ️
      7.76 GB Free RAM
      4.88 GB Active RAM
      3.28 GB Inactive RAM
      1.26 GB Wired RAM
      4.73 GB Page-ins
      0 B Page-outs
    Diagnostics Information: ℹ️
      Jan 2, 2015, 11:15:06 AM Self test - passed
      Jan 2, 2015, 12:06:57 AM /Library/Logs/DiagnosticReports/Dropbox109_2015-01-02-000657_[redacted].cpu_res ource.diag [Details]
    Is there any troubleshooting for deleting that fake account, which appears every time I start my MacBook Pro?
    thanks and regards
    Edoardo

    A smiley face with a ? means a bootable system was not found.
    There may be a problem with either the system software or the hard drive itself.
    Try this.
    Repair Disk
    Steps 2 through 8
    http://support.apple.com/kb/PH5836
    Best.

  • Batch to Batch Transfer and Tracking

    Question,
    Batch-to-batch material transfer is possible using SAP standard functionality.
    On my shop floor, the raw material I am consuming is batch-managed, and we get accounting errors and negative stock due to wrong batch numbers being posted at the time of CO11.
    Due to our customers' requirements, our plant must run with proper tracking of each batch, meaning a finished-goods-to-raw-material (specific batch) relation should exist in the system.
    Suppose I maintain the backflush function for the BOM components in such a way that I always consume raw material from one batch (xxx) in the production storage location. I will then refill that particular batch xxx by transferring material from batch zzz (via batch-to-batch transfer) whenever it is needed for backflushing.
    Backflushing will then always happen from one batch, which will reduce accounting errors and negative stock generation.
    The question is: will I be able to track, and build in the system, the relation between raw material batch zzz and the finished goods?
    If yes, how? If no, what could be the solution for this problem?
    Mr.Daar

    Hi,
    I think your scenario is getting a bit complex with these batch-to-batch transfer postings; if you have a huge number of raw materials, doing this every time adds a lot of complexity. Anyhow, on this point I can say:
    while transferring the batches from XXX to ZZZ, give the production order number as a reference in the material slip or document header text, so that you can track which production order the batches were transferred for.
    In the MSEG table you can trace the consumption of raw materials based on the production order,
    and with movement type 309 you can trace the batch transfers. I think you will have to develop a report for this.
    Regards,
    Ravi

  • Profiling the loading and unloading of modules

    Modules appear to be the ideal solution for building complex rich internet applications, but first-hand experience has also shown me that they can leak memory like nothing else. I've read anything and everything I could find about modules and module loading/unloading, including Alex Harui's blog post "What We Know About Unloading Modules", which reveals a number of potential leak causes that should be considered.
    I've now created a simple soak test that repeatedly loads and unloads a specified module to help identify memory leaks using the profiler. However, even with the most basic of modules, I find that memory usage steadily grows. What I'd like to know is: which memory is unavoidable Flex overhead associated with the loading of modules, and which memory am I responsible for, for not cleaning up object references? I'd like to establish some baseline values against which I can compare future modules.
    I've been following the approach suggested in the Adobe Flash Builder 4 Reference page "Identifying problem areas"
    "One approach to identifying a memory leak is to first find a discrete set of steps that you can do over and over again with your application, where memory usage continues to grow. It is important to do that set of steps at least once in your application before taking the initial memory snapshot so that any cached objects or other instances are included in that snapshot."
    Obviously my set of discrete steps is the loading and unloading of a module. I load and unload the module once before taking a memory snapshot. Then I run my test that loads and unloads the module a large number of times and then take another snapshot.
    After running my test on a very basic module for 200 cycles I make the following observations in the profiler:
    Live Objects (package column filtered out):
    Class: _basicModule_mx_core_FlexModuleFactory
    Cumulative Instances: 201 (1.77%)
    Instances: 201 (85.17%)
    Cumulative Memory: 111756 (24.35%)
    Memory: 111756 (95.35%)
    Whatever that _basicModule_mx_core_FlexModuleFactory class is, its 201 instances end up accounting for over 95% of the memory in "Live Objects".
    Loitering Objects:
    Class: Class
    Instances: 600 (9.08%)
    Memory: 2743074 (85.23%)
    Class: _basicModule_mx_core_FlexModuleFactory
    Instances: 200 (3.03%)
    Memory: 111200 (3.45%)
    However, this data suggests that the _basicModule_mx_core_FlexModuleFactory class is the least of my worries, accounting for only 3.45% of the total memory in "Loitering Objects". Compare that to the Class class, with its 600 instances consuming over 85% of the memory. Exploring the Class class more deeply appears to show them all to be [newclass] internal player actions.
    Allocation Trace (package column filtered out):
    Method: [newclass]
    Cumulative Instances: 1200 (1.39%)
    Self Instances: 1200 (14.82%)
    Cumulative Memory: 2762274 (13.64%)
    Self Memory: 2762274 (62.76%)
    This appears to confirm the observations from the "Loitering Objects" table, but do I have any influence over the internal player actions?
    So this brings me back to my original question:
    Which memory is unavoidable Flex overhead associated with the loading of modules, and which memory am I responsible for, for not cleaning up object references? If these are the results for such a basic module, what can I really expect from a much more complex module? How can I make better sense of the profile data?
    This is my basic module soak tester (sorry about the code dump but there's not that much code really):
    basicModule.mxml
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Module xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/halo"
               layout="absolute" width="400" height="300"
               backgroundColor="#0096FF" backgroundAlpha="0.2">
         <s:Label x="165" y="135" text="basicModule" fontSize="20" fontWeight="bold"/>
    </mx:Module>
    moduleSoakTester.mxml
    <?xml version="1.0" encoding="utf-8"?>
    <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                   xmlns:s="library://ns.adobe.com/flex/spark"
                   xmlns:mx="library://ns.adobe.com/flex/halo"
                   width="400" height="300" backgroundColor="#D4D4D4"
                   initialize="application_initializeHandler(event)">
          <fx:Script>
               <![CDATA[
                    import flash.events.MouseEvent;
                    import flash.events.TimerEvent;
                    import flash.utils.Timer;
                    import mx.events.FlexEvent;
                    import mx.events.ModuleEvent;
                    [Bindable]
                    public var loadCount:int = -1;
                    [Bindable]
                    public var unloadCount:int = -1;
                    public var maxCycles:int = 200;
                    public var loadTimer:Timer = new Timer(500, 1);
                    public var unloadTimer:Timer = new Timer(500, 1);
                    protected function application_initializeHandler(event:FlexEvent):void
                    {
                         loadTimer.addEventListener(TimerEvent.TIMER_COMPLETE, loadTimer_timerCompleteHandler);
                         unloadTimer.addEventListener(TimerEvent.TIMER_COMPLETE, unloadTimer_timerCompleteHandler);
                    }
                    protected function loadModule():void
                    {
                         if (loadCount < maxCycles)
                         {
                              // [correctPath] is the poster's placeholder for the module's deployment path
                              moduleLoader.url = [correctPath] + "/basicModule.swf";
                              moduleLoader.loadModule();
                              loadCount++;
                         }
                    }
                    protected function unloadModule():void
                    {
                         moduleLoader.unloadModule();
                         unloadCount++;
                    }
                    protected function load_clickHandler(event:MouseEvent):void
                    {
                         load.enabled = false;
                         loadModule();
                         unload.enabled = true;
                    }
                    protected function unload_clickHandler(event:MouseEvent):void
                    {
                         unload.enabled = false;
                         unloadModule();
                         run.enabled = true;
                    }
                    protected function run_clickHandler(event:MouseEvent):void
                    {
                         run.enabled = false;
                         moduleLoader.addEventListener(ModuleEvent.READY, moduleLoader_readyHandler);
                         moduleLoader.addEventListener(ModuleEvent.UNLOAD, moduleLoader_unloadHandler);
                         loadTimer.start();
                    }
                    protected function moduleLoader_readyHandler(event:ModuleEvent):void
                    {
                         unloadTimer.start();
                    }
                    protected function moduleLoader_unloadHandler(event:ModuleEvent):void
                    {
                         loadTimer.start();
                    }
                    protected function loadTimer_timerCompleteHandler(event:TimerEvent):void
                    {
                         loadModule();
                    }
                    protected function unloadTimer_timerCompleteHandler(event:TimerEvent):void
                    {
                         unloadModule();
                    }
               ]]>
          </fx:Script>
         <mx:ModuleLoader id="moduleLoader"/>
         <s:VGroup x="20" y="20">
              <s:HGroup>
                   <s:Button id="load" label="Load" click="load_clickHandler(event)" enabled="true"/>
                   <s:Button id="unload" label="Unload" click="unload_clickHandler(event)" enabled="false"/>
                   <s:Button id="run" label="Run" click="run_clickHandler(event)" enabled="false"/>
              </s:HGroup>
              <s:Label text="loaded: {loadCount.toString()}" fontSize="15"/>
              <s:Label text="unloaded: {unloadCount.toString()}" fontSize="15" x="484" y="472"/>
         </s:VGroup>
    </s:Application>
    Cheers,
    -Damon

    Easiest way I've found to get your SDK version from within Builder is to add this: <mx:Label text="{mx_internal::VERSION}" />
    http://blog.flexexamples.com/2008/10/29/determining-your-flex-sdk-version-number/
    Peter
