Dumb question regarding CPU architecture

The question I'm about to ask must seem like really basic knowledge to most of you, so first of all, I apologize for subjecting you all to such a dumb question. Anyway -
I am looking to try ArchLinux out. I'm quite experienced with the two major commercial platforms, have used several other "no setup" Linux distros, and am comfortable with the command line for the most part. Arch, I thought, would be interesting to try out - something a little more to sink my teeth into.
So I hopped on over to the download page, where I am asked to choose the file appropriate for my processor's architecture. Thing is, as a longtime Mac user who has only recently become acquainted with Intel and AMD CPUs, I don't know some details. I know basics and the major differences between them, but not the architecture.
Without further ado, the question: Are desktop Core i5s i686 processors? Is the i686 build required or more suited for the processor than the generic x86-64 build?
Thanks!

iindigo wrote:
The question I'm about to ask must seem like really basic knowledge to most of you, so first of all, I apologize for subjecting you all to such a dumb question. Anyway -
I am looking to try ArchLinux out. I'm quite experienced with the two major commercial platforms, have used several other "no setup" Linux distros, and am comfortable with the command line for the most part. Arch, I thought, would be interesting to try out - something a little more to sink my teeth into.
So I hopped on over to the download page, where I am asked to choose the file appropriate for my processor's architecture. Thing is, as a longtime Mac user who has only recently become acquainted with Intel and AMD CPUs, I don't know some details. I know basics and the major differences between them, but not the architecture.
Without further ado, the question: Are desktop Core i5s i686 processors? Is the i686 build required or more suited for the processor than the generic x86-64 build?
Thanks!
i686 was (strictly speaking) the Pentium Pro processors
i786 was Netburst, i.e. Pentium4/D
i886 was Core2
i986 is Nehalem/Westmere (i.e. your i7/i5/i3)
That being said, the ix86 terminology was dropped, as I recall because the bare numbers couldn't be trademarked. There's LOTS of material on Wikipedia on this topic--throw those CPU-generation ix86 numbers in and read away. If you're using a Core2 or newer, there's no reason not to use the 64-bit build.
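If you want to double-check what the machine reports before downloading, here is a minimal sketch using only the Python standard library (note it reports the architecture of the running OS, not the CPU's full capability):

    import platform

    # Prints 'x86_64' on a 64-bit OS; a 32-bit OS prints 'i686' or similar
    # even on a 64-bit-capable CPU, so run this from a 64-bit live image.
    print(platform.machine())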
Last edited by Skripka (2010-08-01 12:29:37)

Similar Messages

  • Question regarding CPU temps and latest bios (1.8)

    Hi, just built my first system using an MSI board: the K8N Neo2 Platinum with an A64 3500+ Venice. It came with BIOS version 1.5 and everything went smooth. Been running stable for a few days now. CPU temps were anywhere from 38 (idle) to 50-54 under load, though most of the time they were in the mid 40's, which are acceptable temps.
    Anyway, I just installed the latest BIOS tonight (1.8) and the reported temps have dropped significantly. Around 27C (idle), which is actually lower than the reported system temp (28C). One of the listed fixes for BIOS 1.8 is CPU temps, but can these be correct? They seem very low. Which temps should I believe?
    Thanks

    I'd believe the first BIOS's temps more than the second...
    However, what temps are given when using SpeedFan and/or Everest (in Windows)?

  • I want to downgrade from OSX LEOPARD to OSX TIGER but I have a few questions regarding this

    Hi, I want to downgrade from OSX LEOPARD to OSX TIGER, but I have a few questions regarding this. My iMac is originally from Sep 2007 and came preloaded with Tiger. I have the original (2) Tiger install discs, version 10.4.10. I want to know if it is safe, and what the necessary steps are to do so. I'm also wondering, if I downgrade, whether many apps nowadays still support Tiger; for example, I have Photoshop versions 5 and 4, and these are very important to me. One last question: does anyone know of any reliable virus protection for Mac that doesn't slow down your computer? I have read that a lot of them do. If anyone can help me I would greatly appreciate it! Here are the specs for my iMac:
    Model Name: iMac
    Model Identifier: iMac7,1
    Processor Name: Intel Core 2 Duo
    Processor Speed: 2 GHz
    Number of Processors: 1
    Total Number of Cores: 2
    L2 Cache: 4 MB
    Memory: 2 GB
    Bus Speed: 800 MHz

    Most of the time a perception of general slow performance is the result of installing third party junk alleged to speed up, "clean" or "optimize" your Mac, or to look for viruses that don't exist. Ideally you would know what you installed so you can uninstall it, but if you don't know or aren't sure there are techniques such as Safe Mode and creating a temporary user account to confirm that suspicion.
    If you open Activity Monitor it may show a process, or processes, that occupy a lot of your system's time.
    Slowness confined solely to web browser activity is often the result of an inexorable progress toward websites that demand ever more processor-intensive tasks. If your slow performance is strictly limited to web browsing, you might try disabling Flash by either uninstalling it, or use utilities such as ClickToFlash that allow you to control what Flash content gets loaded. Flash in itself is not inherently evil, but there is nothing to stop websites or the advertisers who pay for them from writing horrible Flash code that can do everything from hogging 100% of your CPU's time to causing random crashes. You can watch Activity Monitor as in the above to correlate these troublesome web pages with performance degradation.
    You are correct; if your computer shipped with Tiger you may certainly revert to it. I forgot that Tiger was shipping on new Macs as recently as five years ago. To downgrade it would be necessary to completely erase your hard disk and boot with the Tiger installation DVD, followed by installing it anew. Such drastic measures are not necessary and you are unlikely to be satisfied with the results anyway.
    Assuming your system is free of third party parasitic junk attached to OS X in an ill-conceived attempt to improve upon it, that your hard disk drive is sound and the boot volume has enough free space to work with, by far the best performance-enhancing improvement would be to add more memory. Buy as much as your computer can use and that you can afford. 2 GB is not that much any more.
    Read the following for some recommended troubleshooting techniques from Apple:
    General purpose Mac troubleshooting guide: Isolating issues in Mac OS X
    Creating a temporary user to isolate user-specific problems: Isolating an issue by using another user account
    Memory limitations: Using Activity Monitor to read System Memory and determine how much RAM is being used
    Identifying resource hogs and other tips: Runaway applications can shorten battery runtime
    Starting the computer in "safe mode": Mac OS X: What is Safe Boot, Safe Mode?

  • Questions regarding Optimizing formulas in IP

    Dear all,
    This weekend I had a look at the webinar on Tips and Tricks for Implementing and Optimizing Formulas in IP.
    I’m currently working on an IP-implementation and encounter the following when getting more in-depth.
    I’d appreciate very much if you could comment on the questions below.
    1.) I have a question regarding optimization 3 (slide 43) about Conditions:
    ‘If the condition is equal to the filter restriction, then the condition can be removed’.
    I agree fully on this, but have a question on using the Planning Function (PF) in combination with a query as DataProvider.
    In my query I have a filter in the Characteristic restriction.
    It contains variables on fiscal year, version. These only allow single value entry.
    The DataProvider acts as filter for my PF. So I’d suppose I don’t need a condition for my PF since it is narrowed down on fiscal year and version by my query.
    a.) Question: Is that correct?
    I just want to make sure that I don't get too many records for my PF as input. How detrimental to performance is it to use conditions anyway?
    <b>2.)</b> I read in training BW370 (IP-training) that a PF is executed for the currently set filter (navigational state) in the query and that characteristics that are used in restricted keyfigures are ignored in the filter.
    So, if I use version in the restr. keyfig it will be ignored.
    Questions:
    a.) Does this mean that the PF is executed for all versions in the system, or for the versions that are in the filter of the Characteristic Restrictions rather than the currently set filter?
    b.) I'd suppose the dataset for the PF can never be bigger than the initial dataset that is selected by the query, right?
    c.) Is the PF executed anyway against the navigational state when I use filtering? I have an example where I filter on field customer, thus making my dataset smaller, but executing the PF still takes the same amount of time.
    d.) And I also find that the PF is executed twice. A popup comes up showing messages regarding the execution. After pressing OK, it seems the PF runs again...
    3.) If I use variables in my Planning Function I don't want to fill in the parameter VAR_VALUE with a value. I want to use the variable which is ready for input from the selection screen of the query.
    So when I run the PF it should use the BI-variable. It’s no problem to customize this in the Modeler. But when I go into the frontend the field VAR_VALUE stays empty and needs a value.
    Question:
    a.) What do I enter here? For parameter VAR_NAME I use the variable name, but what do I use for parameter VAR_VALUE? Also the variable name?
    4.) Question regarding optimization 6 (slide 48) about Formulas on MultiProviders:
    'If the formula is using data of only one InfoProvider but is defined on a MultiProvider, then the complete formula should be moved to the single base InfoProvider'.
    In our case we have three cubes in the MP, two realtime and one normal one. Right now we have one AggrLevel (AL) on top of the MP.
    For one formula I can use one cube, so it's better to create another AL with the formula based on that cube.
    For another formula I need the two realtime cubes. This is interesting regarding the optimization statement.
    Question:
    a.) Can I use the AL on the MP then, or is it better to create a new MP with only these two cubes and create an AL on top of that, and then create the formula on the AL based on the MP with the two cubes?
    This makes the architecture more complex.
    Thanks a lot in advance for your appreciated answers!
    Kind regards, Harjan

    Marc,
    Some additional questions regarding locking.
    I encounter that the dataset that is locked depends on the restrictions made in the 'Characteristic Restrictions'-part of the query.
    Restrictions in the 'Default Values'-part are not taken into account. In that case all data records of the characteristic are locked.
    Q1: Is that correct?
    To give an example: Assume you restrict customer on a hierarchy node in Default Values. If you want people to plan concurrently this is not possible, since all customers are locked then. When the customer restriction is moved to the Char Restr, the system only locks the specific customer hierarchy node and people can plan concurrently.
    Q2: What about variables used in restricted keyfigures, like a variable for fy/period? Is only this fy/period locked then?
    Q3: We'd like to lock on a navigational attribute. The nav attr is put as a variable in the filter of the Characteristic Restrictions. Does the system then only lock this selection for the nav.attr? Or do I have to change my locking settings in RSPLSE?
    Then a question regarding locking of data for functions:
    Assume you use the BEx Analyzer and use the query as data_provider_filter for your planning function. You use restricted keyfigures with char Version. First column contains amount for version 1 and second column contains amount for version 2.
    In the Char Restrictions you've restricted version to values '1' and '2'.
    When executing the inputready query version 1 and 2 are locked. (due to the selection in Char Restr)
    But when executing the planning function all versions are locked (*)
    Q4: True?
    Kind regards, Harjan

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200.000.000 documents per day with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure. This would result in the need of at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs I would expect the RU usage to be about 4 RUs per single document insert or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example c# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumption (ok…obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
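    For reference, the arithmetic behind that estimate, as a quick sketch (the RU costs are the observed figures quoted above):

        # Observed RU costs from the measurements above
        RU_PER_CU = 2000            # RUs provided by one capacity unit
        single_insert_ru = 17       # observed cost per single insert
        batch_ru = 770              # observed cost per 50-document batch
        inserts_per_sec = 5000      # required peak write rate

        cus_single = inserts_per_sec * single_insert_ru / RU_PER_CU   # 42.5 CUs
        cus_batch = (inserts_per_sec / 50) * batch_ru / RU_PER_CU     # 38.5 CUs
        print(cus_single, cus_batch)  # both land around the 40 CUs mentioned above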
    I have another question regarding document retention:
    Since I would need to store a lot of data per day I also would need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally according to the number of available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection I would actually insert documents into).
    Is there any (better) way to handle this use case than to buy enough CUs so that the actual hot collection would get the needed throughput?
    Example: 
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20.000
    => 70 CUs (20.000 / 286)
    vs. 10 CUs when using one collection and batch inserts or 20 CUs when using one collection and single inserts.
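    The same arithmetic as a sketch (all figures as stated in the example above):

        RU_PER_CU = 2000
        collections = 7
        ru_per_collection_per_cu = RU_PER_CU / collections       # ~286 RUs
        needed_ru_hot = 20000                                    # from the documentation values above
        cus_needed = needed_ru_hot / ru_per_collection_per_cu    # 70 CUs
        print(round(ru_per_collection_per_cu), round(cus_needed))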
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as-is because of the limit of 10 GB per collection at the moment. I am just trying to do a POC to switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can or should be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and have horrible query execution times with Table Storage because of full table scans.
    Once again my desired setup:
    200.000.000 inserts per day / Maximum of 5000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact the perfect setup would be to have only one (huge) collection with automatic document retention…but I guess this won't be an option at all?
    I hope you understand my problem and can give me some advice on whether this is at all possible, or will be possible in the future with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    first of all thanks for your reply regarding my questions.
    I sent you a mail a few days ago but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents as expected per second and CU.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected…even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were nearly possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still Preview I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections, and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU I am totally fine for now. If not, I have to move on and look for other options…which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a totally theoretical value? Or is it just because of being Preview, and the stated values are planned to work?
    Regarding your feedback:
    ...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server specified retry interval…
    Sadly this is not possible for me because I have to query the data in near real time for my use case…so queuing is not an option.
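    For anyone reading along, the retry pattern described above looks roughly like this; a minimal sketch, in which the client call, the exception type, and its retry_after_ms attribute are hypothetical stand-ins for whatever the SDK actually exposes:

        import time

        class RequestRateTooLargeException(Exception):
            """Placeholder for the SDK's throttling error (hypothetical)."""
            retry_after_ms = 1000

        def insert_with_retry(client, collection_link, doc, max_attempts=10):
            # Retry throttled inserts, waiting the server-specified interval each time.
            for attempt in range(max_attempts):
                try:
                    return client.create_document(collection_link, doc)  # hypothetical call
                except RequestRateTooLargeException as e:
                    time.sleep(e.retry_after_ms / 1000.0)  # back off as the server requests
            raise RuntimeError("still throttled after %d attempts" % max_attempts)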
    We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it.
    I guess I could circumvent this by not clustering into "hot" and "cold" collections but "hot" and "cold" databases with one or multiple collections each (if 10 GB will remain the limit per collection), if there was a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you would be able to answer just one question, this would be:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2000 RUs in reality? ;-)
    Best regards and thanks again

  • Dumb question about user_sdo_geom_metadata DIMINFO entries

    I'm sure that this is a dumb question!
    I create a new entry in user_sdo_geom_metadata as follows...
    INSERT INTO USER_SDO_GEOM_METADATA
    VALUES ( 'PR_A', 'GEOM',
    MDSYS.SDO_DIM_ARRAY(
    MDSYS.SDO_DIM_ELEMENT('X',190000.0,640000.0, 0.05),
    MDSYS.SDO_DIM_ELEMENT('Y',120000.0,680000.0, 0.05)),
    NULL );
    But when I select the DIMINFO from the table...
    SQL> select diminfo from user_sdo_geom_metadata a where table_name = 'PR_A';
    DIMINFO(SDO_DIMNAME, SDO_LB, SDO_UB, SDO_TOLERANCE)
    SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', 190000, 640000, 0), SDO_DIM_ELEMENT('Y', 120000, 680000, 0))
    The sdo_dim_element and sdo_tolerance elements show no decimal places.
    Is this because I have not set some display number format option in SQL*Plus, or for some other reason?
    regards
    Simon

    Simon,
    If you are using an old sqlplus client (8.1.5) then you won't be able to see the decimal places in the diminfo object.
    But if you have a newer (8.1.6/8.1.7) sqlplus client you should be able to see the decimal places.
    If you are not seeing them in these clients then there might be some format parameter set to show numbers without decimals.
    You can do a
    show numformat
    in sqlplus to see if there is any format set for that parameter.

  • Probably a really dumb question...but so am I

    Just bought a brand new iMac (specs below). Old one? Panther (10.3.9), 1 GHz, 2 GB RAM, 80 GB of space.
    Keeping the old iMac. A perfectly good machine. Only reason I bought a new one: kind of maxed out regarding my primary use of a computer...home music recording. Just needed more space, power, speed, etc.
    Question: Before I call the phone company (our ISP: AT&T), I want your advice.
    Can I just hook up two computers (both iMacs, one old, one new), on the same telephone line (a land line...this would be the only phone line coming into the house), and have them work? Or, would the ISP, the telephone company, charge us more for doing this? Meaning operating two computers on the same line simultaneously. (Forget about wireless here. Don't use it, not sure that I will anytime soon.)
    Why couldn't I just plug in the other computer into a phone jack in another room in the house and have it work while the other one, in another room but using the same phone line, is running at the same time? Both simultaneously. (Sure, I'd need a modem for each machine going into the ethernet port.)
    As I said, possibly a dumb question. On the other hand, I'm pretty dumb too.
    Thoughts, advice, suggestions? Greatly appreciated.

    It's probably connecting to a broadband modem (DSL or ADSL). That device allows you to get the high speed internet going on using the same phone jack in your house, and connects to the computer through an ethernet cable.
    "Old school" or dial-up modems are traditionally built into the computer or plug into a USB port, and then use a normal phone cord (same thing other phones in the house use). Aside from being super slow, your line is usually busy and you're unable to make/receive calls when you're online.
    I'm guessing you're connecting to a broadband modem (since you mentioned ethernet cable). With that in mind, the answer is yes, you can hook both those computers up so they can be online at the same time. What you need is what's called a router or switch, and they're not very expensive. One line connects to your broadband modem, then you have 3-4 jacks that you can plug additional computers in (so they can all use the connection at once).
    If you want to have the machines in different rooms but don't want to run wires everywhere, look for a wi-fi router, which will usually give you a few jacks for connecting computers that don't have wi-fi but will also let you do a wireless internet hookup for your new iMac, the old one (if it has wi-fi), and even iPhones if you have 'em.
    Apple has two products that might work for you. The Airport Extreme Base Station has super-fast wifi plus jacks for plugging computers in via ethernet, and even lets you share a printer. http://www.apple.com/airportextreme/ The Time Capsule does all that, and also has a built in hard drive (you can get with a 1TB or 2TB hard drive), which is great for using Time Machine. We've had a Time Capsule since they first came out and love it - super easy to set up, and all the backup stuff just happens without having to worry about it.
    Hope that helps!

  • Dumb question about cycling a battery. Sorry!

    I just received my new MacBook Pro replacement battery today and I have some questions regarding it. First of all, my original battery lasted 2 yrs, 2 mos. I purchased another Apple battery for my 15-inch MacBook Pro (computer originally purchased Aug. 2007). I've seen on these boards that I'm supposed to "cycle" the battery once a week to get the maximum life out of it. I think a cycle means using the battery until its power is used up and then charging up to 100% again. Is that right? Anyway, my question is, "What is the proper care for this new battery? Does it damage it to have it plugged in when it is fully charged? Or should I always keep it unplugged and use the battery until it runs out and then charge it again? And what does it mean to cycle it once a week, if I'm already using the battery and recharging it when needed?" Sorry if these are dumb questions. I'm not really computer savvy! Thanks.

    You should use the battery for a while every day, but not drain it right down, before re-connecting the power and re-charging it to 95%+. This is not a complete cycle but partial/normal cycling.
    _Every couple of months_ you should re-calibrate the battery which means you keep using it until it goes into emergency sleep itself (you will see a warning then 5 minutes later it will go off by itself) and then _leave it in this state for at least 5 hours_ before you re-connect the power supply and fully charge it (without unplugging) - this is a proper re-calibration.

  • Question Regarding MIDI and Sample Accuracy

    Hi,
    I have 2 questions regarding MIDI.
    1. MIDI is moved by ticks. In the arrange window, however, you can move a region by samples. When doing this, you can move to positions between the ticks (which you can see in the position box that pops up). Now, will this MIDI note actually be played back at that specific sample point, or will the event be rounded to the closest tick? (Example: if I have a MIDI note directly on 1.1.1.1, and I move the REGION in the arrange... will that MIDI note now fall on the sample that I have moved the region to, or will it be rounded to the closest tick?)
    2. When making a midi template from an audio region, will the MIDI information land exactly on the sample of the transient, or will it be rounded to the closest tick?
    I've looked through the manual, and couldn't find any specific answer to these questions.
    Thanks!
    Message was edited by: Matthew Usnick

    Ok, I've done some experimenting, and here are my results.
    I believe those numbers ARE samples. I came to this conclusion by counting (for some reason it starts on 11) and cutting a region to be 33 samples long (so, minus 11, is 22 actual samples). I then went to the Audio Bin window, and chose to view region length as samples. And there it said it: 22 samples. So, you can in fact move MIDI regions by samples!
    Second, I wanted to see if the MIDI notes in the region itself would be quantized to the nearest tick. I cut a piece of audio so it had a 1-sample attack (zoomed in as far as I could in the sample editor, selected the smallest portion, faded in, and made that the region start position). I saved the region as a new audio file, and loaded it up in the EXS sampler.
    I then made a MIDI region and triggered the sample on beat 1 (quantized, on the money). I then went into the arrange window, made a fixed cycle length, and bounced the audio. I then moved the MIDI region by one sample to the right. I did this 22 times (which is the number of samples in a tick, at 120 BPM, apparently). After bouncing all of these (the cycle position remained fixed, only the MIDI region was moving) I imported all the audio into the arrange on new tracks, and YES!!! The sample start was cascaded by a sample each time!
    SO.
    Not only can you move MIDI regions by samples, but the positions are NOT quantized to Logic's ticks!
    This is very good news, and glad I worked this out!
    (if anyone thinks this sounds wrong, please correct me, but I'm pretty sure I proved it, in my test)
    Message was edited by: Matthew Usnick
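    For what it's worth, the ~22-samples-per-tick figure is consistent with a 960-ticks-per-quarter-note resolution at a 44.1 kHz sample rate; a back-of-envelope check (the 960 PPQN value is an assumption about Logic's internal resolution):

        sample_rate = 44100.0   # Hz, assuming a 44.1 kHz project
        bpm = 120.0
        ppqn = 960.0            # assumed ticks per quarter note

        samples_per_quarter = sample_rate * 60.0 / bpm   # 22050 samples per beat
        samples_per_tick = samples_per_quarter / ppqn    # ~22.97, close to the ~22 observed
        print(samples_per_tick)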

  • Question regarding homehub and Open reach router -...

    Hi all,
      I had Infinity installed earlier this month and am happy with it so far. I do have a few questions regarding the service and hardware though.
      I run both my BT Openreach router and BT Home Hub from the same power socket. The problem is, if I turn the plug on so both the Home Hub and Openreach router start up at the same time, the Home Hub will never get an Internet connection from the router. To solve this I have to turn the BT Home Hub on first and leave it for a minute, then start the router up and it all works fine. I'm just curious if this is the norm, or do I have some faulty hardware?
      Secondly, I appreciate the estimated speed BT quotes isn't always accurate; I was quoted 49mbits down but received 38mbits down, which I was happy with. Recently though it has dropped to 30, and I am worried this might continue to drop over time; as of present I am 20mbits down on the estimate. For the record, 30mbits is actually fine and probably more than I would ever need. If I could boost it somehow though, I would be interested to hear from you.
    Thanks.

    Just a clarification: the two boxes are the HomeHub (router, black) and the modem (white).  The HomeHub has its own power switch, the modem doesn't.
    There is something wrong if the HomeHub needs to be turned on before the modem.  As others have said, in general best to leave the modem on all the time.  You should be able to connect them up in any order, or together.  (For example, I recently tripped the mains cutout, and when I restored power the modem and HomeHub went on together and everything was ok).
    Check if the router can connect/disconnect from the broadband using the web interface.  Leaving the modem and HomeHub on all the time, go to http://192.168.1.254/ on a browser on a connected computer, and see whether the Connect/Disconnect button works.

  • Question regarding IWDTree and context Value Node naming

    Hi,
    I have a question regarding the IWDTree / IWDTreeNodeType components.
    I have a context looking like this:
    Context
      + ResponseNode
        + PersonNode (1..1)
          + PersonAddressNode                    (empty node, placeholder)
          | + AdresNode (0..n)
          + PersonChildNode                      (empty node, placeholder)
          | + PersonNode (0..n)
          |   + PersonAddressNode                (empty node, placeholder)
          |     + AddressNode (0..n)
          + PersonParentsNode                    (empty node, placeholder)
            + PersonNode (0..n)
              + PersonAddressNode                (empty node, placeholder)
                + AddressNode (0..n)
    The context represents a person, a person's address, and a person's children and parents with their respective addresses.
    As a result, on different branches, a PersonNode and AddressNode can appear.
    And for some strange reason, all PersonNodes and AddressNodes link to the same ResponseNode.PersonNode.PersonParentsNode.PersonNode and ResponseNode.PersonNode.PersonParentsNode.PersonNode.PersonAddressNode.AddressNode respectively, regardless of their branch...
    Is it illegal to have multiple PersonNode and AddressNode node names, and should they be named uniquely?

    Generally, node names need to be unique inside the context, attributes in different nodes can have same names. I wonder if the context structure you described will result in code without compile errors.
    The WD Tree can only be used with recursive context nodes or with a hierarchy of non-singleton child nodes.
    Can you give an example of how your tree should look at runtime?

  • Question regarding roaming and data usage

    I am currently out of my main country of service, and as such I have a question regarding roaming and data usage.
    I am told that airplane mode is sufficient to keep the phone from roaming, but does this apply to any background data usage by applications and such?
    If the phone is in airplane mode, is all use of the phone, including wifi and application use over wifi, outside of all extra charges from roaming?

    Ann154 wrote:
    If you are getting charged to use the wifi, then it is possible.  Otherwise no
    Just to elaborate here: Ann154 is referring to access charges for wifi, which have nothing to do with Verizon; this applies if you are using wifi in a plane, a hotel, an internet cafe, etc. that charges for it rather than offering it for free. Verizon does not charge you for (or indeed know about!) wifi usage, or any other usage that is not on their cellular network (such as using a foreign SIM, for example, in global phones). So these charges, if any, will not show up in the Verizon bill app. Having it in airplane mode prevents all cellular data traffic, so you should be fine.

  • Question regarding MM and FI integration

    Hi Experts
    I have a question regarding MM and FI integration.
    Is the transaction key in OMJJ the same as the OBYC transaction key?
    If yes, then why can't I see transaction key BSX in movement type 101?
    Thanks

    No, they are not the same.  The movement type transaction (OMJJ) links the account key and account modifier to a specific movement types.  Transaction code (OBYC) contains the account assignments for all material document postings, whether they are movement type dependent or not.  Account key BSX is not movement type dependent.  Instead, BSX is dependent on the valuation class of the material, so it won't show in OMJJ.
    thanks,

  • Question regarding 3G and wifi

    I have a question regarding 3G and wifi. I have 3G activated as well as wifi; when I go to retrieve mail, for example, I get a pop up asking me if I want to connect to a wifi network…should I have wifi and 3G activated at the same time, and why am I getting the pop up…
    Thanks

    You can have them on at the same time, but they will not be used at the same time for data. The order of preference for data is WiFi > 3G > EDGE > GPRS. You're getting the pop up, most likely, because you have Settings > Wi-Fi > Ask to Join Networks set to ON. You can set that to OFF, and the iPhone will still join known (i.e. previously used) WiFi networks automatically.
