(Another) Question Regarding Logic's Pan Law

Hello, everyone. Just as I thought my previous pan law thread had exhausted every possible question that could be asked about Logic's PL, another one popped up, albeit one that seems much more easily addressed (or so I hope):
When Logic's PL is set and you bounce a track, will the resulting bounced file, when listened to OUTSIDE of Logic (i.e. on its own), be heard WITH the pan law or without it? In other words, if I bounce the track on a -3dB PL, where my center signal is 3dB lower than my left and right signals, will I then hear the bounced track played outside of Logic with the 3dB-lowered middle? Or will I hear it with the center signal a little louder now?
The reason I ask is because if the latter scenario is the case (i.e. Logic's PL gets ignored when the bounced track is listened to outside of Logic) - which, incidentally, might just be the case, given that some tracks I bounced and then listened to seemed a little louder in the middle than I remembered them being - then what on earth is the purpose of the pan law itself? If the PL only exists and functions so long as you're in the vacuum of Logic, then that obviously won't work, because it doesn't apply OUTSIDE of Logic and consequently gives you inaccurate mixes.
Ugh . . .
Thank you everyone for your responses in advance . . .
And I promise this thread won't be as long as the last one.
Javier Calderon
Message was edited by: Javier72

Hi,
So then to hear back how a -3dB PL bounce sounds in Logic, you have to listen to it with the PL readjusted to 0 dB, correct? Otherwise Logic will add another PL to the track that was already bounced with the PL on it . . .
This statement is incorrect. If you play back a single stereo file in Logic and do nothing to it pan-wise, it will come out sounding exactly the same as the multitrack.
You need to really understand why Pan Law even came to be.
Back in the analogue hardware days of yore, there were some engineers who had issues with certain resistive pots chosen for panoramic control of a mono sound source on a stereo mixer.
They invented a system, for their particular board, whereby a "panoramic" pot would be fed by TWO versions of the same signal instead of one. This caused the center to be about 3dB LOUDER than hard left and right, because when you add the same signal into a mixer twice you generally get a total gain of about 3dB.
So they did some fancy-pantsy calculations and figured out they could add RESISTORS to these two equal signals and bring that center down by 3dB. The side effect was that when you panned hard left or right, you were now getting 3dB LESS signal at those panoramic positions.
When DAWs came along, they originally did not behave like their analogue mixer counterparts. They were mostly set to a 0dB pan law, meaning the center would be louder. After much complaining from the audio industry, some companies changed this, some with options, like eMagic Logic, and some simply changed it, period (Pro Tools).
So now you have the option to have your DAW behave more like an analogue desk, or not. The option is entirely up to you, although if you want to mix music and have it sound like older, analogue recordings, you should choose -3dB.
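If it helps to see the numbers, here is a minimal sketch of the standard constant-power (equal-power) pan curve that a -3dB pan law is based on. This is just the textbook formula written out in Python, not Logic's actual code, so treat the exact taper as an assumption:

import math

def pan_gains(pan):
    # Constant-power (-3dB) pan law for a mono source.
    # pan runs from -1.0 (hard left) to +1.0 (hard right);
    # returns (left_gain, right_gain) as linear multipliers.
    theta = (pan + 1.0) * math.pi / 4.0  # maps pan onto 0 .. pi/2
    return math.cos(theta), math.sin(theta)

def to_db(gain):
    return 20.0 * math.log10(gain) if gain > 0 else float("-inf")

for pan in (-1.0, 0.0, 1.0):
    left, right = pan_gains(pan)
    print(f"pan {pan:+.1f}: L = {to_db(left):7.2f} dB, R = {to_db(right):7.2f} dB")

At pan 0.0 both channels come out at roughly -3.01 dB, i.e. the center is about 3dB down relative to a hard-panned signal, which is exactly the behaviour described above.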
Cheers

Similar Messages

  • Word 2011 for Mac: Advanced question regarding the navigation pane--aka sidebar

    Hi everyone--
    I'm a new Mac owner, with a MacBook Pro 13" 2.4 GHz Intel Core i5, with 8GB RAM and 256GB storage. I'm running the latest OS (Mavericks), freshly purchased from the Apple Store today (July 5, 2014).
    Can you help me figure out if there is some way, in Word 2011 for Mac, to use the navigation pane (aka Sidebar) to click and drag entire sections of the document to a new location? This was basic (advanced, but fundamental) functionality in every version of Word I've used in recent years on Windows machines, and it is the critical reason I purchased Word instead of using one of a dozen free options. My job involves managing and editing large documents--from 2500 to 90,000 words--and the navigation pane/sidebar is crucial to my sanity.
    Previously, I would open the navigation pane and it would show me the structure of my document based on the Heading types. I could click on a heading (say, a chapter title), inside the navigation pane, and drag it to a new location elsewhere in the document. So simple to rearrange the structure of large documents this way. Now, in the Word 2011 for Mac, I can call up the navigation pane (now called the "sidebar") and view the structure of the doc, but I can't actually click and drag anything in the navigation pane.
    Other than this, so far my switch to Mac has gone swimmingly. I love the machine, and am amazed at how much cleaner and easier it is to set up than Windows machines. I'm so frustrated that I even have to interface with Microsoft any more, but this one piece of functionality is critical to me. 
    Thank you in advance for any help you can provide.
    Heather

    Dear Heather,
    I don't know the specific answer to your question.
    But as a new Mac owner, you should make sure that you are using the very latest version of Word 2011 for Mac.
    My recommendation, if you haven't already done this, is to open Word and do Help > Check for Updates from Word's menu, and install any updates Microsoft has released.
    They typically issue updates once or twice a month.
    Enjoy your Mac!

  • Another question regarding smil files and NPR

    I'm having trouble listening to NPR smil files. I've read the previous posts regarding this issue, but seem to be experiencing a different twist. There are no files downloading to play when I click on "listen". When I click the "listen" button a QT Safari screen appears and I get a spinning arrow with the heading "downloading". However, nothing ever downloads to my desktop. This is very strange, as just last weekend I downloaded NPR files and they were saved to my desktop as Real Time files and I was able to play them with no problem. I've done nothing to QT or RT in the meantime.
    I've tried the previous suggestions posted of going to the Get Info section of a previously downloaded smil file and changing the settings to say all smil files should play with RT - restarted my computer - nothing. Help!

    Was able to figure this out myself. Had to go into QT preferences and elect to not play smil-type files, then go into RT and re-elect to play smil files using RT. Restarted my computer and it works like a charm.

  • Another question regarding self...

    Can someone describe the difference between these two pieces of code:
    1.
    MyViewController *aViewController = [[MyViewController alloc] initWithNibName:@"HelloWorld" bundle:[NSBundle mainBundle]];
    self.myViewController = aViewController;
    [aViewController release];
    2.
    self.myViewController=[[MyViewController alloc] initWithNibName:@"HelloWorld" bundle:[NSBundle mainBundle]];
    I get the impression that there is a memory leak in the second bit of code but not the first?

    The answer depends largely on whether or not automated garbage collection is enabled. Assuming it is, I'd say the differences are predominantly stylistic.
    #1 will eat up a tiny bit more stack space with its local declaration of aViewController, but it will also be easier to debug since that variable will be readily visible without a lot of extraneous disclosure triangle drilling into whatever object self is an instance of.

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200.000.000 documents per day with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure. This would result in the need of at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs I would expect the RU usage to be about 4 RUs per single document insert or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example C# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumption (ok…obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I also would need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally according to the number of available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection I would actually insert documents into).
    Is there any (better) way to handle this use case than buying enough CUs so that the actual hot collection would get the needed throughput?
    Example: 
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20.000
    => 70 CUs (20.000 / 286)
    vs. 10 CUs when using one collection and batch inserts or 20 CUs when using one collection and single inserts.
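    To make that capacity arithmetic easy to re-run with other numbers, here is a small back-of-the-envelope sketch in Python. The RU-per-insert costs and the 2000 RUs per CU figure are simply the values quoted in this thread, not official guarantees:
    import math

    RU_PER_CU = 2000  # one Capacity Unit = 2000 Request Units per second, as quoted above

    def cus_needed(ru_per_write, writes_per_sec=5000, collections=1):
        # Rough CU estimate when the total throughput is split evenly across the
        # given number of collections and all writes land on a single hot collection.
        ru_per_hot_collection_per_cu = RU_PER_CU / collections
        return math.ceil(ru_per_write * writes_per_sec / ru_per_hot_collection_per_cu)

    print(cus_needed(4))                  # expected ~4 RU per insert, one collection -> 10 CUs
    print(cus_needed(4, collections=7))   # same cost spread over 7 collections -> the 70 CUs above
    print(cus_needed(17))                 # observed ~17 RU per insert -> 43 CUs, near the "at least 40 CUs" figure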
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as is because of the limit of 10 GB per collection at the moment. I am just trying to do a POC to switch to DocumentDB when it is publicly available. 
    Could you give me any advice on whether this kind of use case can or should be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and have horrible query execution times with it because of full table scans.
    Once again my desired setup:
    200.000.000 inserts per day / Maximum of 5000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention…but I guess this won't be an option at all?
    I hope you understand my problem and can give me some advice on whether this is at all possible, or will be possible in the future, with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    First of all, thanks for your reply regarding my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents as expected per second and CU.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected…even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to improve RU consumption to a point where 500 inserts per second were anywhere near possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in Preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options…which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a totally theoretical value? Or is it just because of being Preview, and the stated values are planned to work later?
    Regarding your feedback:
    ...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server specified retry interval…
    Sadly this is not possible for me, because I have to query the data in near real time for my use case…so queuing is not an option.
    We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it.
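    The "shard across multiple hot collections" idea in that reply boils down to picking a collection by hashing the document id. Here is a tiny, generic sketch of that routing in Python; the collection names are hypothetical and this is not a DocumentDB API, just the hashing idea:
    import hashlib

    HOT_COLLECTIONS = ["hot-0", "hot-1", "hot-2"]  # hypothetical hot collection names

    def route(doc_id):
        # Spread writes (and the per-collection throughput that comes with them)
        # evenly by hashing the document id onto one of the hot collections.
        h = int(hashlib.md5(doc_id.encode()).hexdigest(), 16)
        return HOT_COLLECTIONS[h % len(HOT_COLLECTIONS)]

    print(route("5ac00fa102634297ac7ae897207980ce"))  # id taken from the example document above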
    I guess I could circumvent this by not clustering into "hot" and "cold" collections but "hot" and "cold" databases, with one or multiple collections each (if 10GB will remain the limit per collection), if there was a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you are able to answer just one question, it would be this:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2000 RUs in reality? ;-)
    Best regards and thanks again

  • Another question about adding music to iPhone

    Apologies for yet another question regarding adding music to an iPhone but I am completely stuck.
    I bought a new laptop in January (it runs windows 8). I've authorised it and synced my phone to it before. I've used it to add music before.
    Lately I've just been buying music directly from iTunes on my phone, but I wanted to add some music from my older collection that is on an external hard drive.
    I've gone through the process of syncing the phone again, which has wiped whatever was on there. I made sure I'd ticked the "manually manage music" box.
    It's put all my purchased music back, but it still will not let me drag and drop music from my external hard drive. When I hover over it with the file it shows "link", but it won't actually send the music to the phone.
    Is there a way for me to do this without putting music into my iTunes library? I don't like iTunes and I definitely don't want to add the music to my laptop, as it defeats the purpose of having an external hard drive! Sorry for such a long-winded explanation.

    Just to add... I've now tried adding music to the library (getting desperate here) and it's not letting me do that either. Just says 'link'.
    This is the most frustrating thing ever. Why are the simplest of tasks made so difficult? It seems like it only works if you buy the music from the iTunes store.

  • Question regarding Dashboard and column prompt

    My question regarding Dashboard and column prompt:
    1) Dashboard prompts usually work only for columns which are in the subject area. In my report I've created some columns which are based on other columns. For example, I have a daysNumber column that is based on two other columns, as it calculates the difference of two dates. When I create a dashboard prompt I can't find this column there. I need to make a prompt on this column.
    2) For one of the columns I have only two values, 1 and 0. When I create a prompt for this column, is it possible that the drop-down list shows 'Yes' for 1 and 'No' for 0 and still filters the request?

    Hi Toony,...
    I think there was another way of doing this...
    In the dashboard prompt go to Show option > select SQL Results from dropdown.
    There you need to write your Logical SQL like...
    SELECT CASE WHEN 1=0 THEN PERIODS.YEAR ELSE difference of date functionality END FROM SubjectAreaName
    Here, Periods.Year is a column which already exists in the repository's presentation layer,
    and "difference of date functionality" is the code or formula of the column which you want to show in the drop-down...
    Also write the CASE WHEN 1=0 THEN PERIODS.YEAR ELSE difference of date functionality END code in the fx of that prompt.
    I think this helps you in doing this...
    Just check and inform me if it works...
    Thanks & Regards
    Kishore Guggilla
    Edited by: Kishore Guggilla on Oct 31, 2008 9:35 AM

  • I have some questions regarding setting up a software RAID 0 on a Mac Pro

    I have some questions regarding setting up a software RAID 0 on a Mac pro (early 2009).
    These questions might seem stupid to many of you, but, as my last, in fact my one and only, computer before the Mac Pro was a IICX/4/80 running System 7.5, I am a complete novice regarding this particular matter.
    A few days ago I installed a WD3000HLFS VelociRaptor 300GB in bay 1, and moved the original 640GB HD to bay 2. I now have 2 bootable internal drives, and currently I am using the VR300 as my startup disk. Instead of cloning from the original drive, I have reinstalled the Mac OS, and all my applications & software onto the VR300. Everything is backed up onto a WD SE II 2TB external drive, using Time Machine. The original 640GB has an eDrive partition, which was created some time ago using TechTool Pro 5.
    The system will be used primarily for photo editing, digital imaging, and to produce colour prints up to A2 size. Some of the image files, from scanned imports of film negatives & transparencies, will be 40MB or larger. Next year I hope to buy a high resolution full frame digital SLR, which will also generate large files.
    Currently I am using Apple's bundled iPhoto, Aperture 2, Photoshop Elements 8, Silverfast Ai, ColorMunki Photo, EZcolor and other applications/software. I will also be using Photoshop CS5, when it becomes available, and I will probably change over to Lightroom 3, which is currently in Beta, because I have had problems with Aperture, which, until recent upgrades (HD, RAM & graphics card) to my system, would not even load images for print. All I had was a blank preview page, and a constant, frozen "loading" message - the symbol underneath remained static, instead of revolving!
    It is now possible to print images from within Aperture 2, but I am not happy with the colour fidelity, whereas it is possible to produce excellent, natural colour prints using its "minnow" sibling, iPhoto!
    My intention is to buy another 3 VR300s to form a 4 drive RAID 0 array for optimum performance, and to store the original 640GB drive as an emergency bootable back-up. I would have ordered the additional VR300s already, but for the fact that there appears to have been a run on them, and currently they are out of stock at all but the more expensive UK resellers.
    I should be most grateful to receive advice regarding the following questions:
    QUESTION 1:
    I have had a look at the RAID setting up facility in Disk Utility and it states: "To create a RAID set, drag disks or partitions into the list below".
    If I install another 3 VR300s, can I drag all 4 of them into the "list below" box, without any risk of losing everything I have already installed on the existing VR300?
    Or would I have to reinstall the OS, applications and software again?
    I mention this, because one of the applications, Personal accountz, has a label on its CD wallet stating that the Licence Key can only be used once, and I have already used it when I installed it on the existing VR300.
    QUESTION 2:
    I understand that the failure of just one drive will result in all the data in a RAID 0 array being lost.
    Does this mean that I would not be able to boot up from the 4 drive array in that scenario?
    Even so, it would be worth the risk to gain the optimum performance provided by RAID 0 over the other RAID setup options, and, in addition to the SE II, I will probably back up all my image files onto a portable drive as an additional precaution.
    QUESTION 3:
    Is it possible to create an eDrive partition, using TechTool Pro 5, on the VR300 in bay 1?
    Or would this not be of any use anyway, in the event of a single drive failure?
    QUESTION 4:
    Would there be a significant increase in performance using a 4 x VR300 drive RAID 0 array, compared to only 2 or 3 drives?
    QUESTION 5:
    If I used a 3 x VR300 RAID 0 array, and installed either a cloned VR300 or the original 640GB HD in bay 4, and I left the Startup Disk in System Preferences unlocked, would the system boot up automatically from the 4th. drive in the event of a single drive failure in the 3 drive RAID 0 array which had been selected for startup?
    Apologies if these seem stupid questions, but I am trying to determine the best option without foregoing optimum performance.

    Well said.
    Steps to set up RAID
    Setting up a RAID array in Mac OS X is part of the installation process. This procedure assumes that you have already installed Mac OS 10.1 and the hard drive subsystem (two hard drives and a PCI controller card, for example) that RAID will be implemented on. Follow these steps:
    1. Open Disk Utility (/Applications/Utilities).
    2. When the disks appear in the pane on the left, select the disks you wish to be in the array and drag them to the disk panel.
    3. Choose Stripe or Mirror from the RAID Scheme pop-up menu.
    4. Name the RAID set.
    5. Choose a volume format. The size of the array will be automatically determined based on what you selected.
    6. Click Create.
    Recovering from a hard drive failure on a mirrored array
    1. Open Disk Utility in (/Applications/Utilities).
    2. Click the RAID tab. If an issue has occurred, a dialog box will appear that describes it.
    3. If an issue with the disk is indicated, click Rebuild.
    4. If Rebuild does not work, shut down the computer and replace the damaged hard disk.
    5. Repeat steps 1 and 2.
    6. Drag the icon of the new disk on top of that of the removed disk.
    7. Click Rebuild.
    http://support.apple.com/kb/HT2559
    Drive A + B = VOLUME ONE
    Drive C + D = VOLUME TWO
    What you put on those volumes is of course up to you and easy to do.
    A system really only needs to be backed up "as needed" like before you add or update or install anything.
    /Users can be backed up hourly, daily, weekly schedule
    Media files as needed.
    Things that hurt performance:
    Page outs
    Spotlight - disable this for boot drive and 'scratch'
    SCRATCH: Temporary space; erased between projects and steps.
    http://en.wikipedia.org/wiki/Standard_RAID_levels
    (normally I'd link to Wikipedia but I can't load right now)
    Disk drives are the slowest component, so tackling that has always made sense. Easy way to make a difference. More RAM only if it will be of value and used. Same with more/faster processors, or graphic card.
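    On QUESTION 4, a very rough way to think about it: RAID 0 sequential throughput scales roughly linearly with the number of drives, minus some overhead. The ~120 MB/s per-VR300 figure and the 10% overhead in the sketch below are assumptions for illustration, not measurements:
    def raid0_sequential_mb_s(per_drive_mb_s, n_drives, efficiency=0.9):
        # Best-case striping estimate: N drives read/write in parallel,
        # with a fudge factor for controller/software overhead.
        return per_drive_mb_s * n_drives * efficiency

    for n in (2, 3, 4):
        print(n, "drives:", raid0_sequential_mb_s(120, n), "MB/s")
    # Random access and small files gain much less than this suggests.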
    To help understand and configure your 2009 Nehalem Mac Pro:
    http://arstechnica.com/apple/reviews/2009/04/266ghz-8-core-mac-pro-review.ars/1
    http://macperformanceguide.com/
    http://www.macgurus.com/guides/storageaccelguide.php
    http://www.macintouch.com/readerreports/harddrives/index.html
    http://macperformanceguide.com/OptimizingPhotoshop-Configuration.html
    http://kb2.adobe.com/cps/404/kb404440.html

  • A few questions regarding Oracle VM (and virtualizing DB 11g)

    Hi,
    I've just started evaluating Oracle vm for our next deployment.
    One of the systems I'll need to virtualize is Oracle DB 11g. As far as I understand, the Oracle DB template only comes with ASM? I would prefer to use LVM. What is the best way to install the DB into a virtual machine without ASM? Should I start with EL 5.2 and configure it, or could I somehow use the EL that comes with the template?
    Regarding LVM: I'm planning to create different volume groups, each with just one logical volume. These volumes would be mounted in the guest as /u01, /u02 ... and so on. All volumes will be created on mirrored disks/partitions. Later, I could just move, expand, or stripe volumes as DB usage requires. Does that sound OK?
    And another question regarding Oracle VM: in the manual, chapter 4.6.2 says: "Install an operating system. This may be done a number of ways.
    ■ Install an Oracle VM Server-enabled operating system from CD-ROMs...."
    Is there a list of server-enabled operating systems anywhere? And maybe more detailed explanation of this step?
    Thanks
    Jernej

    Jernej Kase wrote:
    One of the systems I'll need to virtualize is Oracle DB 11g. As far as I understand, the Oracle DB template only comes with ASM? I would prefer to use LVM. What is the best way to install the DB into a virtual machine without ASM? Should I start with EL 5.2 and configure it, or could I somehow use the EL that comes with the template?
    I would probably start with the standard EL5 template instead. The Database template is designed to automatically configure and provision the database with ASM and would probably take longer to dismantle.
    Regarding LVM: I'm planning to create different volume groups, each with just one logical volume. These volumes would be mounted in the guest as /u01, /u02 ... and so on. All volumes will be created on mirrored disks/partitions. Later, I could just move, expand, or stripe volumes as DB usage requires. Does that sound OK?
    It sounds OK, but ASM does all of that and more. It is also faster (particularly on Oracle VM, as it uses direct access to the disks). ASM also does automatic data levelling and striping. With 11g, I would strongly recommend ASM over LVM for your storage.
    Is there a list of server-enabled operating systems anywhere? And maybe a more detailed explanation of this step?
    There isn't a list -- that's possibly badly worded as well. If you don't want to use one of the paravirtualized Oracle Enterprise Linux templates available on eDelivery, you can use any operating system installation CD in ISO format. Note that installations from an ISO are done as fully virtualized guests (i.e. hardware virtualized) and require Intel VT-x or AMD-V extensions to be present and enabled. Hardware virtualized guests are also not as fast as paravirtualized Linux guests.
    Oracle only certifies Database 11g running on Enterprise Linux in paravirtualized mode. The simplest way to deploy this is to use the provided templates.

  • FI-GL: Question regarding "alternative account no." - Why in BSEG?

    Hi all,
    I have another question. I think this is really a little bit tricky this time (I spent a lot of time investigating this question but couldn't find an answer).
    It's regarding the field "alternative account no." in FS00 (table SKB1-ALTKT) and it's about the design of the SAP system regarding this feature (alternative chart of account).
    We've one company code (Belgium) in the system which uses alternative account numbers for a country specific local chart of accounts. The country specific chart of accounts BE01 is assigned to this company code in OBY6 besides the operative chart of accounts. The company code is in production for some years so there are many postings up to now. So far so good. Now, they have found an error in the assignment from alternative account to operative account. As a result, they want us to evaluate the option to change the alternative account number for this account in the transaction FS00.
    For sure, it's not possible to change the alternative account no. in FS00 as long as there is a balance on this account. But if you post this balance to a temporary / technical account, it's possible to change the alternative account no. If you do this, SAP will give you the message FH 165, which is a warning and not an error message (so you can save the changes). After that, it's possible to create an inverse posting in order to get the balance back to this account.
    Now to the strange part (for me): Why does SAP record this alternative account no. for each document line item in the BSEG table in the field BSEG-LOKKT? This is also what the message FH 165 is about. For me, this does not really make sense, but I'm sure that I miss a detail somewhere.
    I mean, you know for example that the alternative account A belongs to the operative account B (via FS00 / SKB1-ALTKT). Therefore, why do you need to write this account to every single line item in BSEG? Why doesn't SAP just substitute the operative account no. with the alternative account no. in all relevant reports (RFBILA00, balance display S_ALR_87012277...).
    The background of my question is now: If I zero out the balance and change the alternative account number in FS00, then all postings up to now won't be changed automatically. So for all postings up to now, the old alternative account no. remains in the BSEG table. For all new postings, the new alternative account no will be in the BSEG table. So from my understanding, there will be an inconsistency in the database if I change the alternative account no.
    In order to evaluate whether I can change the alternative account no. without risking inconsistencies, I would now need to know how this field (BSEG-LOKKT) is used in the SAP system. Is it used in any special reports or for what purpose is it in the BSEG table? What about the balance table GLT0? Is there also a special balance table for the alternative account no. in the system or how are the balances (e.g. for RFBILA00) calculated for the alternative chart of accounts?
    I would be very glad for any help as I am really at the end with my SAP knowledge on this point.
    Thank you in advance and sorry for the long (and maybe confusing?) posting.
    Regards,
    Peter

    hi Peter,
    I believe the system is perfectly designed in this case.
    Let's say you have G/L account A in the Operative CoA, which is linked to account 1 in the Alternative CoA. Then the local law changes and you have to link account A to account 2 from 01.01.2008. The system works perfectly: all the items which were posted earlier are still shown on Alternative account 1 (according to local law for last year), while the new items will be shown on account 2 (according to local law for the new year).
    BSEG-LOKKT is only used for reporting; it does not control anything. On the other hand, there won't be any inconsistency in your system if you change the alternative account number according to business needs.
    hope this helps
    ec

  • Question Regarding Name and Address Cleansing

    Hello,
    Can someone help me with understanding the Name and Address Cleansing operator of OWB.
    I am currently using OWB Client 9.2.0.2.8
    Before using the Name and Address operator, should you do any configuration for it to function? Based on the viewlet which I tried, there were no configuration steps, and once I finished doing the steps in the demo there were rows inserted, but with null data.
    Is Trillium packaged with the OWB or should you still purchase it?
    Thank You

    Thank you for the information Mark, I still have another question.
    Here is my situation:
    I have been tasked to integrate a 3rd party DQ vendor with the name and address cleansing operator of OWB. In relation to that I have downloaded the OracleAdaptorKit provided by Oracle for such development. The problem is I still don't really understand how the whole thing works. My understanding is that once you use the Name and Address operator in a Map and execute it, OWB will call the Name and Address Server, which will trigger the Adaptor (which should be created by the vendor, or me in this case) that will do the Parse() function based on the libraries of the 3rd party DQ vendor. Is my understanding correct?
    My second question is on how the Name and Address Engine will call the Adaptor (Java applet?) which will use the Parse() function, and what values the Name and Address Engine passes to the Adaptor? And how will you know if you have successfully created a connection between the Name and Address Engine and the Adaptor you created?
    My third question is in regard to the components to be created. Given that the Name and Address Engine is by default installed with OWB, does that mean that the only things left to program are the Adaptor and Libraries?
    I hope you can help me, Thank You

  • Question Regarding MIDI and Sample Accuracy

    Hi,
    I have 2 questions regarding MIDI.
    1. MIDI is moved by ticks. In the arrange window however, you can move a region by samples. When doing this, you can move within values of the ticks (which you can see on your position box that pops up) Now, will this MIDI note actually be played back at that specific sample point, or will it round the event to the closest tick? (example, if I have a MIDI note directly on 1.1.1.1, and I move the REGION in the arrange... will that MIDI note now fall on the sample that I have moved the region to, or will it be rounded to the closest tick?)
    2. When making a midi template from an audio region, will the MIDI information land exactly on the sample of the transient, or will it be rounded to the closest tick?
    I've looked through the manual, and couldn't find any specific answer to these questions.
    Thanks!
    Message was edited by: Matthew Usnick

    Ok, I've done some experimenting, and here are my results.
    I believe those numbers ARE samples. I came to this conclusion by counting (for some reason it starts on 11) and cutting a region to be 33 samples long (so, minus 11, is 22 actual samples). I then went to the Audio Bin window, and chose to view region length as samples. And there it said it: 22 samples. So, you can in fact move MIDI regions by samples!
    Second, I wanted to see if the MIDI notes in the region itself would be quantized to the nearest tick. I cut a piece of audio so it had a 1 sample attack (zoomed in as far as I could in the sample editor, selected the smallest portion, faded in, and made the start point the region start position). I saved the region as a new audio file, and loaded it up in the EXS sampler.
    I then made a MIDI region and triggered the sample on beat 1 (quantized, on the money). I then went into the arrange window, made a fixed cycle length, and bounced the audio. I then moved the MIDI region by one sample to the right. I did this 22 times (which is the number of samples in a tick, at 120, apparently). After bouncing all of these (cycle position remained fixed, only the MIDI region was moving) I imported all the audio into the arrange on new tracks, and YES!!! The sample start was cascaded by a sample each time!
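    For what it's worth, that "22 samples per tick" figure lines up with a quick back-of-the-envelope calculation. The sketch below assumes a 44.1 kHz project and Logic's commonly cited resolution of 960 ticks per quarter note; both are assumptions on my part rather than anything from the manual:
    SAMPLE_RATE = 44100        # Hz (assumed project sample rate)
    TICKS_PER_QUARTER = 960    # Logic's commonly cited internal resolution (assumed)

    def samples_per_tick(bpm, sample_rate=SAMPLE_RATE, ppq=TICKS_PER_QUARTER):
        seconds_per_quarter = 60.0 / bpm
        return sample_rate * seconds_per_quarter / ppq

    print(samples_per_tick(120))   # ~22.97 samples per tick at 120 BPM
    That is in the same ballpark as the 22 samples counted above.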
    SO.
    Not only can you move MIDI regions by sample, but the positions are NOT quantized to Logic's ticks!
    This is very good news, and glad I worked this out!
    (if anyone thinks this sounds wrong, please correct me, but I'm pretty sure I proved it, in my test)
    Message was edited by: Matthew Usnick

  • I have a question regarding my txt/alert tones. I recently updated my iPhone 5's software without first backing it up on iCloud or my computer (oops). After my phone finished updating, I lost the new ringtones and txt/alert tones I had bought.

    I have a question regarding my txt/alert tones. I recently updated my iPhone 5's software without first backing it up on iCloud or my computer (oops). After my phone finished updating, I lost the new ringtones and txt/alert tones I had bought. I connected my iPhone to my computer and synced it, hoping that it could download my purchases from the iTunes Store and then put them back onto my phone, but no such luck. When I look at my iPhone on my computer and look at tones, they do not show up at all. However, if I click on the "On This iPhone" tab and click on "Tones", the deleted ringtones and alert tones show up...they are just greyed out and have a dotted circle to the left of them. I also tried to go to the iTunes Store on my phone and redownload them, and it tells me that I have already purchased this ringtone and asks me if I want to buy it again. I press Cancel. I know when I do that with music it would usually let me redownload the song without buying it again, but it won't let me with the ringtones...so how do I get them back? Any help would be greatly appreciated.

    Greetings,
    I've never seen this issue, and I handle many iPads, of all versions. WiFi issues are generally local to the WiFi router - they are not all of the same quality, range, immunity to interference, etc. You have distance, building construction, and the biggie - interference.
    At home, I use Apple routers, and have no issues with any of my WiFi enabled devices, computers, mobile devices, etc - even the lowly PeeCees. I have locations where I have Juniper Networks, as well as Aruba, and a few Netgears - all of them work as they should.
    The cheaper routers, Linksys, D-Link, Siemens home units, and many other no-name devices have caused issues of various kinds, and even connectivity problems.
    I have no idea what Starbucks uses, but I always have a good connection, and I go there nearly every morning and get some work done, as well as play.
    You could try changing channels, 2.4 to 5 Gigs, changing locations of the router. I have had to do all of these at one time or another over the many years that I have been a Network Engineer.
    Good Luck - Cheers,
    M.

  • Questions regarding using the .monitor command to return an animated image, and we would like feedback on a webpage that is monitoring a 5kW wind turbine :)

    I'm embedding a front panel image in an existing HTML document. I would like to use the .monitor command in the URL together with the refresh command so the VI will automatically reload every 20 seconds. This actually works, but at the same time I want to have the possibility to refresh manually so I don't have to wait 20 seconds before new values are shown in the display. Is this possible to do?
    Another question: since the real time display updates 1-2 times a second, the .monitor command is used to get an animated picture of the Real Time Display.
    There are several ways to add animation to web pages. The techniques used here are "server push" and "client pull", which make the browser repeatedly reload a changing inline image to provide crude animated sequences. This is not the most efficient way, as it results in an image being re-transmitted for each frame of the animation. The .monitor command with the refresh and lifespan attributes in the URL triggers these "server push" and "client pull" techniques.
    I use this automatic refresh of the display so that it shows different values each time; is this what is called crude animation? Then I'm wondering what I'm supposed to use the lifespan command for? I can't see the use of it in my display.....?
    link to the webpage so you can have a look at the display:
    http://134.7.139.176/.monitor?Real%20Time%20Performance.vi&refresh=20
    This is a project that I'm working on together with another Norwegian friend. We would be very happy for feedback on our web page and displays; go to http://www.ece.curtin.edu.au/~peersena/ if you would like to view it. Thanks

    Annis,
    One of the other things to keep in mind is that the generation of an image does take some computing power, so having the generation and the acquisition on the same machine is not always ideal. If you're using the machine that is publishing the front panel just to collect data, it's not so much of an issue.
    If you really want to monitor in "Real-Time", using Remote Panels (requires LabVIEW 6.1) is your best option. This posting has more information on using Remote Panels and links to some live examples:
    http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HOID=506500000008000000C0660000&UCATEGORY_0=_49_%24_6_&UCATEGORY_S=0&USEARCHCONTEXT_TIER_0=0&USEARCHCONTEXT_TIER_S=0&USEARCHCONTEXT_QUESTION_0=web+control&USEARCHCONTEXT_QUESTION_S=0
    Remote panels makes it possible to control the application remotely as well.
    With .monitor the only way I've been able to manually refresh is to "Shift+Refresh" on the browser.
    Regards,
    Kamran

  • Question regarding Request Notification Template - OIM 9.1.0.2

    Hi All,
    I have a question regarding the notification generated when a request is raised. Currently, the body of the notification refers to the requestor who raised the request (the body of the email has attributes like <%Requester Info.First Name%>, <%Requester Info.Last Name%>). It's fine if the requestor is raising the request for him/herself. However, if the requestor is raising the request on behalf of another user, then this notification causes confusion, since it refers only to the requestor in its body and not the beneficiary.
    Is there a way to include the end beneficiary's details in the body of the notification?
    Please help in this regard
    Regards
    Vinay

    Hi Gurus,
    Any idea on this?
    Regards
    Vinay
