Question regarding transient analysis and piecewise linear (PWL) voltage export data

Howdy,
My work in Multisim focuses on the analysis of equivalent circuits. I approach this by running a transient analysis with a custom voltage input defined in a PWL source, then exporting the data to Excel for analysis and matching the time points of the transient analysis with the voltage across the circuit. This lets me observe the I-V behaviour.
The problem is that the exported data is usually an array of hundreds of entries, which may or may not align with the discrete values of my custom voltage input, so it does not allow a direct comparison against real-world data.
Is there a way to lock the time step (and significantly reduce the number of iterations) of the transient analysis so that I always export the same voltages, i.e. only the voltages specified in my custom piecewise linear source?
The reason I am interested in doing this is to generate data that lines up directly with the specified custom voltage input, and therefore with my real-world experimental data, without having to pick through hundreds of iterations at varied voltages and cherry-pick the points that correspond to my real-life measurements.
Many thanks.

SPICE simulations don't use a fixed delta time: if the signal is changing rapidly, the SPICE engine slows down and takes more samples, and if the signal is constant it automatically speeds up and takes fewer samples. You cannot control the SPICE simulation sampling. One suggestion is to export an .lvm file instead of an Excel file, because Multisim will convert the data to a constant delta t, which may be better for you. You can open the .lvm file with Notepad to view the format.
Tien P.
National Instruments
Attachments:
LVM export.PNG (46 KB)
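Building on the suggestion above, one way to get values that line up with the PWL breakpoints is to resample the exported data yourself. Below is a minimal sketch (assuming a whitespace-separated export with time, voltage and current columns and no header rows; the file name, column layout and breakpoint times are hypothetical) that linearly interpolates the simulated waveforms at exactly the time points of the custom PWL source, so each output row corresponds to one specified input point.

    import java.nio.file.*;
    import java.util.*;

    // Sketch: resample exported transient data at the PWL breakpoint times.
    // Assumes a whitespace-separated export (time, voltage, current) with no header;
    // a real .lvm file carries a header block that you would skip first.
    public class ResampleAtPwlPoints {
        public static void main(String[] args) throws Exception {
            double[][] sim = readColumns("transient_export.txt");   // hypothetical file name
            double[] pwlTimes = {0.0, 1e-3, 2e-3, 3e-3, 4e-3};      // breakpoints of your PWL source

            System.out.println("t\tV\tI");
            for (double t : pwlTimes) {
                double v = interp(sim[0], sim[1], t);               // column 1 assumed to be voltage
                double i = interp(sim[0], sim[2], t);               // column 2 assumed to be current
                System.out.printf("%g\t%g\t%g%n", t, v, i);
            }
        }

        // Read whitespace-separated numeric columns into column-major arrays.
        static double[][] readColumns(String path) throws Exception {
            List<double[]> rows = new ArrayList<>();
            for (String line : Files.readAllLines(Paths.get(path))) {
                line = line.trim();
                if (line.isEmpty()) continue;
                String[] parts = line.split("\\s+");
                double[] row = new double[parts.length];
                for (int k = 0; k < parts.length; k++) row[k] = Double.parseDouble(parts[k]);
                rows.add(row);
            }
            int cols = rows.get(0).length;
            double[][] out = new double[cols][rows.size()];
            for (int r = 0; r < rows.size(); r++)
                for (int c = 0; c < cols; c++) out[c][r] = rows.get(r)[c];
            return out;
        }

        // Linear interpolation of y(x) at xq; clamps outside the simulated time range.
        static double interp(double[] x, double[] y, double xq) {
            if (xq <= x[0]) return y[0];
            for (int k = 1; k < x.length; k++) {
                if (xq <= x[k]) {
                    double f = (xq - x[k - 1]) / (x[k] - x[k - 1]);
                    return y[k - 1] + f * (y[k] - y[k - 1]);
                }
            }
            return y[y.length - 1];
        }
    }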

Similar Messages

  • Quick question regarding best practice and dedicating NICs for traffic separation

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic, etc. I get that it's best practice to try and separate traffic where you can, especially for things like FT, but I just wondered if there was a preferred method for achieving this. What I mean is ...
    -     Is it OK to have everything on one switch but set each respective port group to have a primary and a failover NIC (i.e. FT, iSCSI and all the others have a failover)? This would sort of give you a backup in situations where you have limited physical NICs.
    -     Or should I always aim to separate things entirely, with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which traffic I should segregate on its own separate switch? Is there some sort of ranking order of priority/importance? FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since that seems like a failover risk to me.

    I know the answer to this probably depends on how many physical NICs you have at your disposal, but I wondered if there are any golden 100% rules, for example that FT must absolutely be on its own switch with its own NICs even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch, etc.

  • Question regarding the "mcxquery" and "dscl -mcxread" commands:

    Question regarding the mcxquery and dscl -mcxread commands:
    Does anyone know why the mcxquery and the dscl . -mcxread commands don't show any info about MCX-managed login items & printers? The System Profiler's "Managed Client" section does. I'd like to see info regarding managed printers and managed login items using the MCX tools. I have Mac users running 10.5.2 with both login items and printers that are pushed out to them via MCX. The System Profiler app shows all of their policies, but the dscl . -mcxread and mcxquery tools don't. Why not?
    -D
    Message was edited by: Daniel Stranathan

    How do you "call procedures/functions" without SQL code? You need at least a call statement like
    {call myProc(?,?,?)} that you wrap in a CallableStatement.
    Other than that: when you switch off auto-commit, you need to call commit/rollback at the end. Usually, if you don't commit or roll back a non-auto-committed connection, the transaction gets committed or rolled back when you close the connection - that depends on the JDBC driver. But it's never a good idea to omit the commit/rollback calls on a connection with auto-commit disabled. Usually you enclose your code in a try/catch block like this:
    con.setAutoCommit(false);
    try {
       // ... execute your statements / CallableStatement here ...
       con.commit();
    } catch (Exception e) {
       con.rollback();
    } finally {
        con.setAutoCommit(true); // or:
        con.close();
    }
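    For completeness, here is a self-contained sketch of that pattern (the JDBC URL, credentials, and the stored procedure myProc with its three parameters are hypothetical placeholders taken from the call string above):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class CallProcExample {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details - substitute your own JDBC URL and credentials.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
            try {
                con.setAutoCommit(false);
                // Wrap the call string in a CallableStatement and bind its parameters.
                try (CallableStatement cs = con.prepareCall("{call myProc(?,?,?)}")) {
                    cs.setInt(1, 42);                                   // example IN parameter
                    cs.setString(2, "some input");                      // example IN parameter
                    cs.registerOutParameter(3, java.sql.Types.VARCHAR); // third parameter assumed to be OUT
                    cs.execute();
                    System.out.println("OUT parameter: " + cs.getString(3));
                }
                con.commit();   // commit explicitly, as advised above
            } catch (SQLException e) {
                con.rollback(); // undo the work if anything failed
                throw e;
            } finally {
                con.setAutoCommit(true);
                con.close();
            }
        }
    }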

  • A few questions regarding Oracle Scorecard and strategy management

    Hi,
    I have the following questions regarding Oracle Scorecard and strategy management:
    1. What are the ways in which I can show the status of a KPI? We have colors and symbols; what else is available?
    2. Can we keep a log of KPIs, store them, and keep a report of the feedback/actions taken on them?
    3. Do Scorecard and strategy management have the ability to retain a history of feedback and a log of
    entries, i.e. date/time and user name?
    4. Do Scorecard and strategy management have the ability to use common mathematical formulas, e.g. median, average, percentiles? Please describe.
    Thanks in advance for your help.

    bump.

  • I have a question regarding running iOS and Windows using virtualization software.

    Greetings!
    I have a question regarding running iOS and Windows using virtualization software. I recently bought a monitor so I can display Windows on it and run both OSes at the same time. Right now I'm using Boot Camp. I downloaded VirtualBox for "transferring" Windows onto it. Since I'm a new iOS user, what do I need to do to make this work?
    Do I need to uninstall Windows from Boot Camp, install VirtualBox, and then install Windows again?
    Any information would be appreciated!

    That should work. From OS X you can run MS Windows using VirtualBox and use a separate display for the VirtualBox window; that way you can run OS X and MS Windows simultaneously. However, remember that VirtualBox is freeware, not a commercial application like Parallels or VMware Fusion, and may not have the features of a commercial application. Support for Windows run in any virtualization application (VirtualBox, Parallels or Fusion) is generally not provided on this forum, as they are not OS X related. To get help with those apps you will usually need to go to their forums.
    Remember that iOS will not run under either OS X or MS Windows; it only runs on iOS devices.
    Good luck with your installation.

  • MOVED: question regarding CPU temps and latest bios (1.8)

    This topic has been moved to AMD64 nVidia Based board.
    question regarding CPU temps and latest bios (1.8)

    I'd believe the first BIOS's temps more than the second's...
    However, what temps are reported when using SpeedFan and/or Everest (in Windows)?

  • Questions regarding Xft fonts and the awesome WM

    Hi, I just tried ArchLinux yesterday and loving it!
    I use JWM, but am very interested in dwm/awesome wm. I do have some questions:
    1. In a thread on the forum, it is mentioned that dwm has no Xft support. How useful exactly is Xft? Would it impact design work involving GIMP/Inkscape/Scribus?
    2. Does awesome have Xft support?
    3. I installed awesome, but I have no idea what keys to use. The shortcuts are different from dwm. I tried to find a quick tutorial just to get the basics (dwm has one on their website), but it seems like all the documentation I found skips that part and goes straight to configuration.
    Do I actually have to configure the WM before I can use it at a very basic level?
    4. I would like to do some more comparison between WMs, trying to find a good balance of lightness and the functions I need. However, I have no idea what to use to keep track of CPU/memory usage for the various WMs. What's a good way to do this?
    Thank you in advance.

    Welcome to Arch!
    1. The lack of xft support in dwm mostly just means that you won't get anti-aliased fonts in the status bar. Individual applications will still have the same font support they had before you used dwm.
    2. Awesome uses pango for font rendering, which is even better than xft.
    3. Look at the man page for default key bindings. It's more or less usable out of the box, but you will definitely want to customize it somehow.
    4. A good system monitor is htop. Install it from the repos and run it from a terminal.
    Last edited by fflarex (2009-02-23 18:44:45)

  • Urgent very urgent help regarding sales analysis and purchase analysis.

    Respected experts,
    When I view a sales analysis or purchase analysis for a particular month or year, I see only the amount before discount and tax, which is not the actual net amount including tax. I want the total gross amount (including the tax amount) as well as the net amount in the sales and purchase analysis. Also, how can I add more fields in the Form Settings menu of the sales analysis and purchase analysis?
    Awaiting expert advice,
    Regards
    saisakshi.

    Hi,
    System reports like the sales analysis and purchase analysis are hardcoded. There is no way for you to add anything that you cannot already find in the report.
    If you need to customize it, you need to create your own report.
    Thanks,
    Gordon

  • Questions regarding *dump_dest parameters and fast_recovery_area

    Hello,
    I just installed a fresh new 11.2.0.2 Database on Solaris 10.
    Everything was straightforward on the parameter side. I tried a custom install as well as the general purpose template. When installing with DBCA, I set every parameter around the DB name in lowercase.
    With this, some questions are popping into my mind regarding certain parameters after installation.
    First, the %dump_dest parameters contain the DB name twice in the path (ocpdb in my case):
    background_dump_dest       /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    user_dump_dest                 /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    core_dump_dest                 /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/cdump
    Is it normal to have .../rdbms/dbname/dbname/... as the path, with dbname repeated? Why?
    Second, a question regarding the directory structure under fast_recovery_area (the new term for flash_recovery_area). The directory structure is:
    oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l
    total 2
    drwxr-x--- 2 oracle oinstall 512 2010-10-28 19:53 ocpdb
    drwxr----- 5 oracle oinstall 512 2010-10-29 07:44 OCPDB
    oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l ocpdb
    total 9528
    -rw-r----- 1 oracle oinstall 9748480 2010-10-31 21:09 control02.ctl
    oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l OCPDB/
    total 3
    drwxr----- 5 oracle oinstall 512 2010-10-31 03:48 archivelog
    drwxr----- 3 oracle oinstall 512 2010-10-29 07:44 autobackup
    drwxr----- 3 oracle oinstall 512 2010-10-29 07:43 backupset
    Why do I have a subdirectory with the DB name in uppercase AND one in lowercase? Should I specify the DB name in uppercase at database creation to have all files under the same directory, or in lowercase? Or is this normal?
    I want to know how to do it well before reinstalling a fresh database.
    Thanks
    Bruno
    Edited by: blavoie on Oct 31, 2010 6:18 PM
    Edited by: blavoie on Oct 31, 2010 6:20 PM

    Hi,
    I just reinstalled all from scratch, everything in lowercase as well in environment variables and dbname in dbca:
    oracle@enalab13:~$ echo $ORACLE_SID
    ocpdb
    Fast recovery area directories; the dates prove that it's my fresh install:
    oracle@enalab13:/u01/app/oracle$ ll fast_recovery_area/
    total 2
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 ocpdb
    drwxr-x--- 4 oracle oinstall 512 2010-11-02 11:24 OCPDB
    oracle@enalab13:/u01/app/oracle$ ll -R fast_recovery_area/
    fast_recovery_area/:
    total 2
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 ocpdb
    drwxr-x--- 4 oracle oinstall 512 2010-11-02 11:24 OCPDB
    fast_recovery_area/ocpdb:
    total 9528
    -rw-r----- 1 oracle oinstall 9748480 2010-11-02 11:34 control02.ctl
    fast_recovery_area/OCPDB:
    total 2
    drwxr-x--- 3 oracle oinstall 512 2010-11-02 11:24 archivelog
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 onlinelog
    fast_recovery_area/OCPDB/archivelog:
    total 1
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:24 2010_11_02
    fast_recovery_area/OCPDB/archivelog/2010_11_02:
    total 47032
    -rw-r----- 1 oracle oinstall 48123392 2010-11-02 11:24 o1_mf_1_5_6f0c9pnh_.arc
    fast_recovery_area/OCPDB/onlinelog:
    total 0
    Some interesting output that was asked about earlier in the thread:
    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     4
    Next log sequence to archive   6
    Current log sequence           6
    SQL> show parameter recovery
    NAME                                 TYPE        VALUE
    db_recovery_file_dest                string      /u01/app/oracle/fast_recovery_area
    db_recovery_file_dest_size           big integer 4032M
    recovery_parallelism                 integer     0
    SQL> show parameter control_files
    NAME                                 TYPE        VALUE
    control_files                        string      /u01/app/oracle/oradata/ocpdb/control01.ctl,
                                                         /u01/app/oracle/fast_recovery_area/ocpdb/control02.ctl
    SQL> show parameter instance_name
    NAME                                 TYPE        VALUE
    instance_name                        string      ocpdb
    SQL> show parameter db_name
    NAME                                 TYPE        VALUE
    db_name                              string      ocpdb
    SQL> show parameter log_archive_dest_1
    NAME                                 TYPE        VALUE
    log_archive_dest_1                   string
    log_archive_dest_10                  string
    log_archive_dest_11                  string
    log_archive_dest_12                  string
    log_archive_dest_13                  string
    log_archive_dest_14                  string
    log_archive_dest_15                  string
    log_archive_dest_16                  string
    log_archive_dest_17                  string
    log_archive_dest_18                  string
    log_archive_dest_19                  string
    SQL> show parameter %dump_dest 
    NAME                                 TYPE        VALUE
    background_dump_dest                 string      /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    core_dump_dest                       string      /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/cdump
    user_dump_dest                       string      /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    I think next time I'll set up everything related to the Oracle SID in uppercase...
    Maybe these are details I don't need to care about... but it seems that something is going wrong with the management of the fast_recovery_area...
    Thanks
    Bruno

  • Question regarding XI/PI and IDoc processing.

    Hi,
    I'm learning XI/PI and I have a question regarding IDoc processing in PI.
    We need to configure communication between our BW system and our PI system using IDocs.
    The IDocs are sent from BW to our PI system and are then sent back to the BW system; there is no third system involved. The IDocs flow only between PI and BW.
    Our BW system is already connected to many other R/3 systems using WE20/WE21 and RFCs, and everything works perfectly.
    While configuring this communication between BW and PI, it seems that PI passes the IDoc to the IDoc adapter, converts it to XML and tries to find a receiver for that particular IDoc. I see the error "NO_RECEIVER_CASE_ASYNC" in SXMB_MONI.
    Is this normal behaviour in PI? Why does PI think that the IDoc needs to be sent to another system when it is in fact intended for PI itself?
    Thanks and regards
    Remi

    Hi
    Regarding the error "NO_RECEIVER_CASE_ASYNC" in SXMB_MONI:
    this problem may occur due to one of the following reasons, so check:
    1. Is the service active? In transaction SICF, activate the service sap/xi/engine (right-click, Activate).
    2. Is port 8001 defined in the services shown in SMICM under Services?
    3. Check the roles assigned to PIDIRUSER:
    http://help.sap.com/saphelp_nw04/helpdata/en/56/361041ebf0f06fe10000000a1550b0/content.htm
    PIDIRUSER should have the role SAP_XI_ID_SERV_USER attached to it.
    Also check whether PIDIRUSER has the following roles:
    SAP_SLD_CONFIGURATOR
    SAP_XI_RWB_SERV_USER
    SAP_XI_RWB_SERV_USER_MAIN
    Regards
    Abhishek

  • A few questions regarding SAP EWM and WM

    Hello,
    I have a few general questions regarding the differences between EWM and WM:
    1) What are the benefits of EWM-MFS compared to WM + TRM (especially in terms of SPS)?
    2) The Quality Inspection Engine (QIE) can also be used by SAP WM, right?
    3) There is RFID-support in EWM, so EWM is able to communicate directly with SAP Auto-ID, right?
         But I have heard that SAP PI is necessary in some cases, when and why?
    4) Is there something new in EWM regarding goods receipt processing?
        I have read that the splitting of inbound delivery items is possible in EWM in case of missing inbound delivery items. Is this really  a new feature?
    5) EWM can easily be connected to SAP BW for reporting purposes, what about WM?
    6) What about scalability if the warehouse grows?
    7) Is there any information about the costs of using EWM compared to WM and vice versa?
    I appreciate any kind of help.
    Thank you.
    Dennis

    Hi,
    1. What does SAP offer as a product for DWM? Is it a "special" installation of the SAP framework dedicated to WM, or is it a standard ECC box where only the WM module is used?
    There are two versions of decentralized WM. One is Decentralized WM as part of ECC, and the other is EWM as part of SCM. Both are decentralized.
    2. My understanding is that the interfaces between ERP and DWM can support some non-real-time operations (for example, when the main ERP system is down, the DWM can still perform some operations). Considering that the transactional interfaces are based on BAPIs, how does SAP achieve this interfacing in non-real-time environments? I am thinking you cannot complete the different processing unless both systems are up.
    When it comes to interfaces, DWM needs deliveries from ERP. That's it; WM can function from there independently of the ERP system. But WM definitely needs to communicate PGI, PGR and other posting changes back. So if ERP is down, even though PGI/PGR is done on the WM side, they may not be communicated back to ERP. However, WM generates PGI/PGR IDocs which can always be reprocessed on the WM side to resend them to ERP, so that inventory levels stay accurate.
    Hope that helps
    Thanks
    Vinod.

  • Question regarding stacks, searches and smart collections

    Apologies if this is considered a 'basic' question - but I hope that someone can help me.
    I'm currently in the process of upgrading/migrating a reasonably large Photoshop Elements 6 catalog where I've made extensive use of hierarchical folder structures, keywords and star ratings to quickly locate photos using a range of different techniques. I've successfully upgraded/migrated the Photoshop Elements catalog into Lightroom 3, and as part of verifying that everything has come across OK I've done some comparisons of catalog searches in Elements and Lightroom, and I seem to be getting some strange results; I'm not sure if this is simply how things work or if I'm doing something wrong. I think part of the issue is caused by the fact that Elements always does destructive edits - so I never edited original photos in Elements and instead made extensive use of copied photos and stacks - but this didn't seem to cause any issues, as Elements seemed to keep things straight.
    In Elements, the result of a query or Smart Collection might return 18 stacks of photos (with most of the stacks containing multiple photos) - but for most purposes Elements simply treated this as 18 separate photos and ignored all of the photos beneath the top of each stack.
    Now in Lightroom I get different results depending on how the photos are identified. If I use either a keyword or rating search in the 'Right Hand' panel, I get a photo count returned which is always much higher than 18, but Lightroom seems to retain the stacks, so it only displays 18 different stacks. However, if I put the same search criteria into a Lightroom Smart Collection, it retrieves and displays ALL of the photos in the 18 stacks (so it displays 2-3 times more photos) and I can't seem to find a way to get the Smart Collection to honour these stacks. I know that I could probably alter each of my photo stacks and change the rating or keyword of all of the photos under the top of the stack - but trust me, this is a huge amount of work!
    Is this simply the way Lightroom works? I can partially understand and accept the way direct keyword or rating searches work using the 'Right Hand' panel - although the photo counts are different from what I've got used to in Elements, the way the photos are actually displayed is not that different. However, what really confuses me is the completely different way Smart Collections work when compared to the 'equivalent' direct query. Have I missed something? Or is this some form of technical issue/bug/future enhancement request?
    Also, on a slightly related issue - I've noticed that keywords with spaces (or other special characters) seem to cause issues for Lightroom, while Elements seems to cope with these OK. From the reading I've done it looks like one of the most common suggestions is simply to remove the spaces (etc.) in the keywords - is that what most people would recommend?
    Any help, advice or other suggestions would be appreciated.
    Kind Regards .... Jerry

    I'm currently in the process of upgrading/migrating a reasonably large Photoshop Elements 6 catalog where I've made extensive use of hierarchical folder structures, keywords and star ratings to quickly locate photos using a range of different techniques
    Please tell us EXACTLY the steps you are using to move your PSE catalog to Lightroom.
    However, if I put the same search criteria into a Lightroom Smart Collection, it retrieves and displays ALL of the photos in the 18 stacks (so it displays 2-3 times more photos) and I can't seem to find a way to get the Smart Collection to honour these stacks. I know that I could probably alter each of my photo stacks and change the rating or keyword of all of the photos under the top of the stack - but trust me, this is a huge amount of work!
    I believe this is how Lightroom was designed to work. Smart collections don't recognize that some photos are at the bottom of the stack.
    Also, on a slightly related issue - I've noticed that keywords with spaces (or other special characters) seem to cause issues for Lightroom, while Elements seems to cope with these OK. From the reading I've done it looks like one of the most common suggestions is simply to remove the spaces (etc.) in the keywords - is that what most people would recommend?
    I have no trouble whatsoever using keywords that have spaces in them. I have keywords that are "New York", "New Jersey", "Union Pacific Railroad", etc. Special characters, such as a comma, will probably cause trouble. Exactly what are you doing where spaces in keywords are not working properly?

  • A few questions regarding Oracle VM (and virtualizing DB 11g)

    Hi,
    I've just started evaluating Oracle vm for our next deployment.
    One of the systems I'll need to virtualize is Oracle DB 11g. As far as I understand, the Oracle DB template only comes with ASM? I would prefer to use LVM. What is the best way to install the DB into a virtual machine without ASM? Should I start with EL 5.2 and configure it, or could I somehow use the EL that comes with the template?
    Regarding LVM: I'm planning to create different volume groups, each with just one logical volume. These volumes would be mounted in the guest as /u01, /u02 ... and so on. All volumes will be created on mirrored disks/partitions. Later, I could just move, expand or stripe volumes as DB usage requires. Does that sound OK?
    And another question regarding Oracle VM: in the manual, chapter 4.6.2 says: "Install an operating system. This may be done a number of ways.
    ■ Install an Oracle VM Server-enabled operating system from CD-ROMs...."
    Is there a list of server-enabled operating systems anywhere? And maybe more detailed explanation of this step?
    Thanks
    Jernej

    Jernej Kase wrote:
    One of the systems I'll need to virtualize is Oracle DB 11g. As far as I understand, the Oracle DB template only comes with ASM? I would prefer to use LVM. What is the best way to install the DB into a virtual machine without ASM? Should I start with EL 5.2 and configure it, or could I somehow use the EL that comes with the template?
    I would probably start with the standard EL5 template instead. The Database template is designed to automatically configure and provision the database with ASM and would probably take longer to dismantle.
    Regarding LVM: I'm planning to create different volume groups, each with just one logical volume. These volumes would be mounted in the guest as /u01, /u02 ... and so on. All volumes will be created on mirrored disks/partitions. Later, I could just move, expand or stripe volumes as DB usage requires. Does that sound OK?
    It sounds OK, but ASM does all of that and more. It is also faster (particularly on Oracle VM, as it uses direct access to the disks). ASM also does automatic data levelling and striping. With 11g, I would strongly recommend ASM over LVM for your storage.
    Is there a list of server-enabled operating systems anywhere? And maybe a more detailed explanation of this step?
    There isn't a list -- that's possibly badly worded as well. If you don't want to use one of the paravirtualized Oracle Enterprise Linux templates available on eDelivery, you can use any operating system installation CD in ISO format. Note that installations from an ISO are done as fully virtualized guests (i.e. hardware virtualized) and require Intel VT-x or AMD-V extensions to be present and enabled. Hardware virtualized guests are also not as fast as paravirtualized Linux guests.
    Oracle only certifies Database 11g running on Enterprise Linux in paravirtualized mode. The simplest way to deploy this is to use the provided templates.

  • A simple question regarding c:out and ADF

    I have a small problem which I can't figure out :-(
    The solution is probably dead easy, but I don't see it ..
    The problem is :
    I have a JSP file which has the following tags:
    <c:out value="${bindings.EstimatedInterest}" />
    <c:out value="${bindings.RemainingAmount}" />
    And I want to have one more field which just adds these two together.
    But I get this error :
    Attempt to coerce a value of type "oracle.jbo.uicli.binding.JUCtrlAttrsBinding" to type "java.lang.Long" (null)
    Can anyone please point me in the right direction?
    With regards, TA

    This may help. I asked Muench what I think is about the same question:
    http://radio.weblogs.com/0118231/2004/04/28.html
    in general:
    <c:out value="${(bindings.Returned.inputValue.value / bindings.Total.inputValue.value) * 100}"/>
    (this may not be the same as your question but will point you in a direction)

  • WaveBurner: a few questions regarding track files and Session Data

    After a bit of negotiating, I'm almost ready to burn. I was having problems with file locations and discovered that I had multiple files in different locations. Those have been eliminated, and all the relevant track files are now in one place along with the Song Data file.
    I then re-imported the individual AIFF tracks, only to find that all of my fades and edits are GONE. It appears that I may have to DO THEM ALL OVER AGAIN! Please tell me that there is a workaround.
    Is it possible to import only the Session Data related to the fades and edits (everything but the actual audio files), so as to avoid all this "do-again" work?
    I was also wondering whether it is necessary, or perhaps a good idea, to change the file info for each of the individual audio track files from "Open With": iTunes (the default setting) to "Open With": WaveBurner (selectable on the AIFF files in the "Show More Info" window).
    Currently the AIFF files are set to "Open With": iTunes (default).
    One last question, please.
    Is WaveBurner really up to snuff? How many people are using it for mastering and burning CDs? I'm about to start inserting some AU mastering plug-ins and I'm hoping for the best. Any other suggestions are greatly appreciated.
    G5 Dual 2.0/PBook G41.5Ghz/LogicPro7 Live5 Reason3.0 PansncDA7 TascamFW1804   Mac OS X (10.4.7)  

    Your grammar isn't bad, jord; it was I who asked a plethora of questions under one subject heading.
    When I open the project file (.wb3),
    the prompt says:
    "Please choose a replacement from the list below:"
    /Volumes/320GB HD/Users/ahuihou/.Trash/1 Track 01.aiff
    /Volumes/320GB HD/Users/ahuihou/.Trash/2 Track 02.aiff
    /Volumes/320GB HD/Users/ahuihou/.Trash/3 Track 03.aiff
    /Volumes/320GB HD/Users/ahuihou/.Trash/4 Track 04.aiff
    /Volumes/320GB HD/Users/ahuihou/.Trash/6 Track 06.aiff
    As you can see the files are in the trash.
    The first problem is that there are no actual files in the Trash, just the icon of a CD which was dragged there to eject it; the disc is no longer in the drive/tray. Perhaps I should put the disc back in the drive and then update/replace the files afterwards. (The CD is scratched.)
    The strange thing is that I can still preview the track that's in the Trash by selecting More Info and playing it on the little QuickTime player bar. I guess the file is in a cache somewhere?
    I've looked everywhere on all drives for the specific files but cannot find them anywhere.
    In the Replacement prompt there is a "Search in the same folder for subsequent files" option that I can check.
    Which "same folder" is it referring to?
    The Trash folder where the old files are, or the folder where the actual .wb3 project file is? I do have all of the correctly named song files/regions in the same folder as the project file.
    Maybe I should just start again. I'm not one to give up easily, though, and would much prefer to "beat" the computer at the game.
