Will using a multiprocessor system improve DSC Tag Engine performance?

We are developing a multiple workstation vacuum chamber automation control application using the DSC.
The chambers under control each have a set of process controllers (Opto22 "Ultimate Brains") running the fundamental interlock and process mechanisms via their own software. The brains are set up for communication via OPC, so LabVIEW can monitor the I/O states of the system, as well as variable values in the brain software, via DSC tags. In addition, LabVIEW can manipulate variables to request that the brain software branch to different subroutines. The other ("control") workstations in the system pass requests to the brains via the software on the monitoring workstation, to ensure that requests are enqueued properly.
The problem is that at this point there are 1,300 tags configured for the DSC, and the workstation responsible for monitoring them shows nearly 100% CPU load all the time, most of it consumed by the DSC Engine. This is with only half of the final project's chambers installed and active. As a result, it sometimes takes several attempts for a control workstation to successfully pass a request to the brains via the monitoring workstation.
We are concerned that performance will only worsen as we bring the additional chambers online.
Would adding a second processor to the workstation improve performance? If dual processors would help, would additional processors help more?
Note: we are examining which tags we monitor all the time and are going to try to reduce that list to the tags critical for normal operation, with an option to temporarily expand monitoring to the larger list for debugging. I am concerned that even if that helps now, the problem will get worse again as we bring additional components online. Is it the sheer number of tags defined for the DSC engine that gates the load on the engine, or the number that we are actively reading with our program?
Thanks for any illumination you can offer.
Kevin R
Kevin Roche
Advisory Engineer/Scientist
Spintronics and Magnetoelectronics group
IBM Research Almaden

I have a partial answer. We've swapped in the dual-processor machine and see some improvement, though the processor load is still hovering around 100%.
More importantly, we think we've learned something about how the DSC engine actually works. The monitoring workstation not only runs the DSC engine to exchange data with the other workstations, but also an OPC server to handle transactions with the "brains". So any requests for data from the brains really are routed via the monitoring workstation.
We had built one common tag database because we thought that would simplify programming. We did some tests today, however, and discovered that if we stop the tag engines on the control workstations, processor load drops dramatically on the monitoring workstation.
What we've realized is that, apparently, if a read tag exists in a machine's database, the DSC engine fetches its value regardless of whether our LabVIEW software ever actually uses it. We deleted most of the brain tags from the control workstation databases, leaving only the LabVIEW memory tags and the few brain tags actually used by our VIs. So now the monitoring workstation is not being asked to query those 1000 tags by three different tag engines, only by the one that uses them.
CPU load is down to about 73% now (because the monitoring workstation is still itself watching those 1000 tags). That's still high, but we have a better idea of what is going on.
So -- is there any way to have the DSC engine only fetch a tag value when you really need it, rather than always fetching every tag in the database?
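For what it's worth, if the engine can't do that, one workaround we may prototype is querying the OPC server directly for rarely-needed values instead of defining them as always-scanned tags. A rough sketch of on-demand reads using the open-source OpenOPC client for Python (the server ProgID and tag names below are placeholders, not our real configuration):

    import OpenOPC

    # Connect to the OPC server on the monitoring workstation.
    # 'Opto22.OpcServer.1' is a placeholder ProgID; substitute the real one.
    opc = OpenOPC.client()
    opc.connect('Opto22.OpcServer.1')

    # Read a single tag only when it is actually needed.
    # OpenOPC returns a (value, quality, timestamp) tuple.
    value, quality, timestamp = opc.read('Chamber1.Brain.PressureSetpoint')

    # Or read a small group of tags in one round trip; this returns a list
    # of (name, value, quality, timestamp) tuples.
    results = opc.read(['Chamber1.Brain.State', 'Chamber1.Brain.ErrorCode'])

    opc.close()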
Kevin Roche
Advisory Engineer/Scientist
Spintronics and Magnetoelectronics group
IBM Research Almaden

Similar Messages

  • Will using the JDBC API instead of Lookup APIs hamper performance?

    hi forum experts,
    I have done a JDBC call in a UDF in a message mapping to fetch some records from a database. I tried doing it with the Lookup APIs first, but found that the Lookup APIs don't provide any method to execute DML statements, and moreover they don't provide transactional behaviour. So I tried the JDBC API (java.sql.*), where I used custom code on the Connection class to provide transactional behaviour, e.g. explicitly calling commit(); in this case the XI communication channels are not being used.
    Will using the JDBC API instead of the Lookup APIs hamper performance?
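    My custom transactional code follows the usual explicit-commit pattern (sketched here in Python's DB-API for brevity; the real code uses java.sql with Connection.setAutoCommit(false), commit() and rollback(). The table and column names are invented for the example):

        import sqlite3

        # Stand-in for the real database connection.
        conn = sqlite3.connect("example.db")
        try:
            cur = conn.cursor()
            # Two DML statements that must succeed or fail together.
            cur.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (100, 1))
            cur.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (100, 2))
            conn.commit()      # make both changes permanent
        except Exception:
            conn.rollback()    # undo both on any failure
            raise
        finally:
            conn.close()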

    Hi Sudeep,
    This will surely help you:
    /people/saravanakumar.kuppusamy2/blog/2005/01/19/rdbms-system-integration-using-xi-30-jdbc-senderreceiver-adapter
    /people/william.li/blog/2007/03/30/using-jdbc-connection-pool-in-xi-message-mapping
    According to the SAP help:
    Use of the Lookup API - Calls to other application systems are sometimes necessary to meet the following requirements:
    ● To get read access to application system data in the mapping program
    ● To call existing mapping routines in the application system
    So DML statements can't be used.
    Additional help
    /people/prasad.illapani/blog/2006/10/25/how-to-check-jdbc-sql-query-syntax-and-verify-the-query-results-inside-a-user-defined-function-of-the-lookup-api
    [Reward if helpful]
    Regards,
    Prateek

  • DSC Tag Engine doesn't run after using eval copy of LabVIEW

    I used an eval copy of LabVIEW.
    I uninstalled the eval copy and bought and installed the full NI Developer Suite.
    Now LabVIEW 6i (6.0) works properly, but I'm unable to use DSC because the Tag Engine stops immediately with this message: "This evaluation copy of LabVIEW has expired. Engine will close."
    I'm using a Pentium III PC with 128 MB RAM and the Italian version of Win98.

    Hi Silvano,
    I found a similar case from another customer. National Instruments is aware of this issue, but the only workaround for now would be (proper version):
    1) uninstall LabVIEW DSC and LabVIEW
    2) delete the registry key from the old LabVIEW eval version with regedit.exe:
    [HKEY_LOCAL_MACHINE\Software\National Instruments\LabVIEW\lvedid]
    3) reinstall LabVIEW and LabVIEW DSC
    (short version)
    1) Rename lvedid in
    [HKEY_LOCAL_MACHINE\Software\National Instruments\LabVIEW\lvedid]
    to e.g. ..\LabVIEW\1lvedid
    Hope this helps
    Roland

  • DSC "tag engine" "unable to start the citadel 5 service" "unable to start the service"

    Hi, I don't know why, but I'm not able to start the
    tag engine anymore. First I thought of a corrupt *.scf-
    file, but this is not the case.
    To investigate the problem I wrote a nice program,
    perhaps it is of interest. And perhaps someone else
    had the problem and was able to find the reason for the
    problem. I also included screen shots of the error messages.
    I'm working with LV 7.1.1 (but the problem was there
    before upgrading from 7.1 to 7.1.1). I saved the example
    code also in previous versions of LV.
    Regards, Stefan
    Attachments:
    could not start tag engine.zip (1093 KB)

    Hi, I asked support to help me, but they were not able
    to help; they suggested a new installation.
    To avoid that, I tried to reinstall (repair) only the Datalogging and Supervisory Control (DSC) Module, and afterwards I mass-compiled the whole NI directory.
    For the moment, the problem is solved. Regards, Stefan

  • LV7.1 DSC tag engine VS LV8.6 DSC shared variables

    I'm currently running LV7.1 with DSC and RT. To handle communications and logging of RT variables I'm using the init / read / write publish VIs on the RT side and DataSockets on the HMI side (Windows XP). This has worked out great: new tags can be programmatically created in real time with the publish VIs, and then I go to the .scf file and use the tag configuration wizard to add them and handle data logging. The wizard would organize all of the memory tags into folders by the block name used by the init publish VI, and I could select entire groups of tags and add hundreds at a time to the .scf file. Hardware tags worked in a similar fashion, organized into folders by controller and module.
    Now, looking at LV8.6, I found I can still use the init / read / publish VIs on the RT side - great. However, there is no tag configuration editor as in LV7.1 to let me add large numbers of tags through a wizard. The closest thing I've found is to create a library to represent each block name from the RT init publish VI, then use the "create bound variables" option under the library to bind the new shared variables to the RT memory tags. I can browse to the tags on the controller by network items, but when I add them it doesn't bring in the block name of the tag as 7.1 did, only the item name. I use a lot of PID loops that share the same tag names (i.e. P, I, D, mode, output), so not including the block name represents an organizational problem. The approach is also very labor intensive compared to the wizard in LV7.1 DSC, especially when creating systems with thousands of RT memory tags.
    Also, there is a similar problem with hardware channels (I'm using Compact FieldPoint). To log channels via DSC, do I have to create a shared variable for each channel to access the DSC logging capabilities? Again, how do I add all of the hardware channels in some organized fashion? I hope I'm missing some tool that is an analog to the tag configuration wizard to bring in these channels and organize them. Any help or suggestions would be appreciated.
    Thanks, Brad

    Hi lb,
    We're glad to hear you're upgrading, but because there was a fundamental change in architecture since version 7.1, there will likely be some portions that require a rewrite. 
    The RTE needs to match the version of DSC you're using.  Also, the tag architecture used in 7.1 is not compatible with the shared variable approach used in 2012.  Please see the KnowledgeBase article Do I Need to Upgrade My DSC Runtime Version After Upgrading the LabVIEW DSC Module?
    You will also need to convert from tags to shared variables.  The change from tags to shared variables took place in the transition to LabVIEW 8.  The KnowledgeBase Migrating from LabVIEW DSC 7.1 to 8.0 gives the process for changing from tags to shared variables. 
    Hope this gets you headed in the right direction.  Let us know if you have more questions.
    Thanks,
    Dave C.
    Applications Engineer
    National Instruments

  • Will using for loop decrease the performance

    Hi,
    Will using a for loop with a query decrease performance?
    for r_row in (select * from some_table) loop
      -- process r_row
    end loop;
    This is done inside another for loop; in most cases it returns only one value.
    Will it decrease the performance of the procedure?
    Kindly advise.
    Regards,
    Balu

    user575682 wrote:
    Will using a for loop with a query decrease performance?
    for r_row in (select * from some_table) loop
      -- process r_row
    end loop;
    This is done inside another for loop; in most cases it returns only one value. Will it decrease the performance of the procedure?
    Perhaps it is better to understand just what this PL/SQL loop construct does.
    PL/SQL is two languages. It is PL (programming logic code) like Pascal or C or Java, and you can use a 2nd language inside it, called SQL. The PL engine is clever enough to recognise when the 2nd language is used, and it compiles all the stuff that is needed for the PL engine to call the SQL engine, pass data to the SQL engine, get data back, etc. (Compare this with the complexity of using the SQL language in Pascal or C or Java.)
    So what does that loop do? The PL engine recognises the SQL SELECT statement. It creates an implicit cursor by calling the SQL engine to parse it (hopefully a soft parse) and then execute it.
    As part of the PL loop, the PL engine now calls the SQL engine to fetch the data (rows) from the cursor. With 10g and later, the PL engine is smart enough to use implicit bulk processing.
    Prior to 10g it used to fetch a row from the SQL engine, do the loop, fetch the next row, do the loop, etc. This means that if there are 1000 rows to fetch, it will call the SQL engine 1000 times.
    With 10g and later it will fetch 100 rows, store them in an internal buffer, and then do the loop 100 times. With 1000 rows to fetch, it now only requires 10 bulk fetches instead of 1000 single-row fetches.
    These fetches require a context switch - the PL engine has to step out into the SQL engine and back to fetch a row. This is an overhead and thus becomes slower the more context switching there is.
    And that is the basic operation of this loop construct (and most other cursor loops) in PL/SQL.
    The ideal is to reduce the number of context switches, as this overhead can impact performance.
    What about using a loop within a loop? Also "bad". This uses the outer loop to fetch data, and that data is then used to drive the fetch in the inner or nested loop. So the outside loop pulls data from the SQL engine into PL variables, and the inside loop pushes that very same data back to the SQL engine.
    Why? It would have been a lot faster not to pull and push that data between the loops using PL.
    It will be a lot faster doing it via SQL only. Write both loops as a single SQL statement and have the SQL engine directly drive these loops itself. This is called a JOIN in SQL. And the SQL engine can do it not only faster, but it has some froody algorithms that can be used that are even faster than a nested loop process (merge joins, hash joins, etc).
    Bottom line: Maximise SQL. Minimise PL.
    Do as much of your data crunching in SQL as possible. SQL is the best and fastest "place" to process data. Not PL (or Pascal/C/Java).
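    To make the point concrete, here is a toy version of the nested-loop-versus-JOIN comparison, written in Python with sqlite3 rather than PL/SQL so it can be run anywhere (the tables and data are invented for the example):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE emp  (id INTEGER PRIMARY KEY, dept_id INTEGER, name TEXT);
            INSERT INTO dept VALUES (1, 'Sales'), (2, 'Engineering');
            INSERT INTO emp  VALUES (1, 1, 'Alice'), (2, 2, 'Bob');
        """)

        # Nested-loop style: one extra query (and context switch) per outer row.
        for dept_id, dept_name in conn.execute("SELECT id, name FROM dept"):
            for (emp_name,) in conn.execute(
                    "SELECT name FROM emp WHERE dept_id = ?", (dept_id,)):
                print(dept_name, emp_name)

        # Set-based style: a single JOIN, and the database drives the loop.
        for dept_name, emp_name in conn.execute(
                "SELECT d.name, e.name FROM dept d JOIN emp e ON e.dept_id = d.id"):
            print(dept_name, emp_name)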

  • HT2500 Using Mail in System X 5.8 It will not accept attachments (photos, pdf files)

    Using Mail in System X 5.8, it will not accept attachments (photos, PDF files) to send. It was working just fine, and after some period of non-use it just stopped accepting attachments. The system is all up to date, and I have the same problem on two different iMacs.

    I had exactly the same problem as arthur1234. I followed his own suggested fix in his post of 10/12/2012 at 9:31 AM.
    His fix worked perfectly. Attachments are now being sent and received as they have been for the last 10 years or so.

  • What is the best way to detect loss of OPC Server connection when using DSC Tags?

    I'm using the DSC Module on a new project and I'm pretty impressed so far. The HMI Wizard has saved me quite a bit of time.
    My application is configured where the DSC Tags are connected to remote OPC Server Tags.
    The issue I'm having is that I cannot detect a loss of the OPC Server while the application is running. Reads of the front panel controls/indicators still return values, and the little "connection" icon next to them is still green. Even if the connection icon turned red it wouldn't help, since the front panel is not visible when the main application is running: it is a sub-VI that's in charge of OPC data interfacing, and the rest of the application uses the data from that OPC sub-VI.
    I cannot effect a change on the OPC Servers, so I need a method of detecting when the Server is lost on my end.
    Any ideas on the best way to do this?
    Thanks,
    Jim

    Hi Jim,
    Ideally, error reporting and handling should be the way to catch this. However, if errors are not reported/handled, as is sometimes the case with OPC, a quick-and-dirty way to do this would be to check for a "heartbeat" signal from your OPC Server. This could be a boolean tag which toggles on and off (or a counter ticking). You then read this tag in DSC in a slow loop using the Read Tag VI (not a front-panel control), and keep track of the Changed? output from this Read Tag VI.
    As long as the Changed? output is true, you are receiving data from the OPC Server, and hence it's alive. You may add some deadband logic to wait for a specific period of time before declaring the Server really dead!
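    The same watchdog logic as a rough sketch in Python (read_heartbeat is a hypothetical stand-in for the Read Tag VI; the Changed? check becomes a value comparison):

        import time

        def watch_heartbeat(read_heartbeat, timeout_s=10.0, poll_s=1.0):
            """Poll a heartbeat tag and return once it stops changing for timeout_s."""
            last_value = read_heartbeat()
            last_change = time.monotonic()
            while True:
                time.sleep(poll_s)                   # slow polling loop
                value = read_heartbeat()
                if value != last_value:              # equivalent of Changed? = TRUE
                    last_value = value
                    last_change = time.monotonic()   # server is alive
                elif time.monotonic() - last_change > timeout_s:
                    return "OPC server lost"         # deadband period expired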
    Hope this helps,
    Khalid

  • If you use Secunia PSI, System Update will be displayed as insecure

    I use Secunia PSI to check the currency of my installed programs. Within the past few weeks Secunia PSI has been reporting ThinkVantage System Update as insecure.
    If you have V3.14 installed on XP or Vista, you have the latest release and your system is not insecure.
    If you have V4 installed on Windows 7, your system is not insecure. 
    It appears that Secunia PSI is picking up the kernel version, which is currently version 3.0 (this is what it should be).
    Secunia PSI offers a download solution, and if you click on it you will get version 3.14. Do not download the Secunia PSI solution if you have Windows 7, as V3.14 will not work on W7.
    After downloading the solution offered by Secunia PSI for XP and Vista, a rescan will still show your system as insecure. Ignore it: your system is secure. Secunia is aware of the problem.
    T23: 2647-8RU, Ubuntu 12.04 LTS
    A61E: 6418-12U, W7/Pro 64
    X200: 7454-CTO, W7/Pro 32

    Hi,
    thanks for the post. It will surely help other users.
    Just one note:
    Multiple "scaning" tools are detecting TVSU and other applications as unsecure, because they detect, that this application is making a lot of times a internet access and this is sometimes considered as "unsecure".
    I would also advice in this case to ignore it.
    Thanks again.
    Cheers

  • HT2602 If I download Mountain Lion, will all user accounts use this operating system automatically?

    If I download Mountain Lion, will all user accounts use this operating system automatically? Or do I need to download it for each user? It's taking ages to download, too.

    Welcome to the Apple Support Communities
    When you upgrade the operating system, the upgrade applies to all user accounts, so you won't have to upgrade OS X separately for each user.

  • I am an online student and the school uses the cloud system; however, the iPad will not allow me to post a reply. Has anyone got an answer as to what I need to do to change this? Yes, iCloud is set up on my iPad. It is frustrating that I can't do this.

    I cannot respond to my online school, which uses the cloud system; iCloud on my iPad is turned on and I have the proper iOS. What is the problem?

    Unfortunately yes, though technology would never move on if it kept to older standards and did not strive to do more.
    The app store always gives you the option to use a previous version of the app IF it is available for your device.
    Sorry but you are stuck without new hardware.
    PJRS

  • Will the FM tuner work while I use the Nike+ system?

    Anyone have any knowledge on this?
    Also, it would be cool if you could switch to video while using the Nike+ system and not interrupt a run.

    The answer is "sort of."
    If you turn on the FM radio first, THEN go to Nike+, select "Now Playing" and start your run, you can listen to the radio during your workout and even switch between preprogrammed radio stations. I did a 14 mile run this morning and listened to the radio the whole time.
    However, it's an awkward workaround, and my Nano crashed a few times while I was doing it. Plus, you can't switch between radio and your iTunes music, which it seems like you should be able to do.
    I love listening to radio while I run and bought the 5G Nano for this purpose. It's disappointing that Apple didn't do a better job of integrating the FM radio into their Nike+ system. All they need to do is add a radio function to the Nike+ music menu. Come on guys, it shouldn't be this hard.

  • What are the pros and cons of using a port system?

    Hello All,
    I'm a new explorer in the OS X UNIX world and have installed MacPorts, and, for the most part, succeeded in building and using a number of scientific applications.
    I have noted a somewhat negative attitude by some on the use of port systems, while others seem quite content with them.
    When making my decision to use macports, these "selling points" seemed desirable to me:
    • Confines ported software to a private "sandbox" that keeps it from intermingling with your operating system and its vendor-supplied software, to prevent them from becoming corrupted.
    • Allows you to create pre-compiled binary installers of ported applications to quickly install software on remote computers without compiling from source code.
    Especially the first point seems valuable, but am I deluding myself? Or am I losing functionality/flexibility? Or am I just missing out on manually installing lots of dependencies?
    _I'm not trying to start a feud, here._
    I'm just looking for some pointers (preferably well-substantiated) from those more knowledgeable than me, before I am any further committed to a choice I might later regret.
    Thanks,
    PWK

    The biggest drawback/complaint I have is that you're bound by the implementation/installation policy of whoever built the port.
    For example, take the installation issue - all software gets installed into some specific directory, which is great on one hand - fewer compatibility issues with conflicting versions of what Apple provides. The downside, though, is that nothing on your machine will use these versions unless/until you tweak it.
    For example, maybe you want to install the MacPorts version of PHP, great, but the standard Apache won't use that, so you either need to install the MacPorts version of Apache, or tweak your Apache installation to use the non-standard PHP version.
    Well, what about PATHs, I hear you ask? Sure, you could prepend the MacPorts/fink/whatever directory to your $PATH, but then you always use the MacPorts/fink/whatever version of any installed software, which might not be what you want.
    This becomes more of an issue in a multi-server environment where you have multiple systems that all need tweaking/maintaining - nothing worse than setting up a new server by copying an existing installation, only to find that it depends on MacPorts/fink/whatever being installed.
    The corollary to this is that these package managers often install ancillary software that you neither need nor want. It might have improved since I last looked, but installing either MacPorts or fink, for example, installs whole new versions of perl, GNU tools (gzip/gunzip, etc.), curl, and more - they even install new copies of openssl/ssh.
    I don't want these. These already exist on my system so what are they needed for? Why can't they use the standard copies? Are they 'tweaked' in some way? How? why?
    The secondary issue is that you are limited to the port's implementation - especially compile options - which may not be ideal for your machine.
    Unlike most GUI-based software, much open-source software uses compile-time options to configure the executable. Now the port installer might do a reasonable job of tweaking the installation, but it's not psychic so there will be cases where you end up with sub-optimal installations. Sure, they might work well enough, but that doesn't beat knowing what those options are up-front and building your own.
    Now, there have been cases where I've tried to install software and almost given up when faced with a daunting list of dependencies (try RRD, or PHP with GD, for example), but when you succeed, the satisfaction of getting it working, plus the fact that you now know how to do it, counts for a lot.
    Now, do I wish that Apple would do a better job of keeping up with the latest versions of the open source software they include in Mac OS X? Absolutely - isn't that what Software Update is all about? But I also wish the port maintainers would spend more of their time updating the original source configure and make scripts to incorporate whatever changes they're making to the code, so that Mac users can easily use the same source distribution as other UNIX flavors.
    And right there is the final rub IMHO - all the while the port managers create their distributions of common tools Mac OS X is treated like a poor step-child that's kept in the cellar. OK, maybe not that bad, but there's no reason why anyone who wants to install open source software on a Mac should need much more than:
    (download)
    ./configure
    make
    sudo make install
    it really isn't all that hard. Too often the port managers perpetuate the myth that Mac OS X is too different from other UNIX systems to work with the standard tools that everyone else knows.
    Now, maybe I'm also too old for this game since you always downloaded and built tools yourself when I started, and maybe package managers on Linux (which may have the same issues I've complained about) have helped elevate Linux in the mindset of a younger generation who are looking for a quick fix. All I can say to that is…
    GET OFF MY LAWN! :-D

  • Recommended system improvements after GTX470 died

    After 4 years of using my system with no major issues and only minor tweaks, I started to get several BSOD errors. I eventually realised that my ASUS GTX 470 was dead. This is never a good thing; however, I had already been thinking that it was time to investigate some upgrades, and I'd very much appreciate the thoughts and wisdom of the forum to work out which path to take.
    As well as editing HD DSLR footage I am now editing 4K footage from the Panasonic GH4.  I also work in After Effects and plan to do more in Cinema 4D.  The PC is also used for Photoshop and graphic design tasks.
    I currently have:
    MB: Asus P6X58D-E
    CPU: Intel Core i7 930 2.8GHz
    CPU FAN: Noctua NH-D14 Dual Radiator CPU Cooler
    RAM: 12GB (6 x 2GB) Corsair Dominator DDR
    Graphics: NOW DEAD BUT DID HAVE - 1.28GB Asus GTX 470 3.3GHz
    Case: Coolermaster ATCS 840
    OS drive (Win 7 64-bit): WD VelociRaptor, 300GB, 10,000rpm
    Storage: WD Black, 2TB, 7,200rpm
    Media: WD Black, 2TB, 7,200rpm
    Project/Source Video: WD Black, 1TB, 7,200rpm
    Media Cache/Page File/Export: Samsung SSD EVO 840, 120GB
    Drive bay (back-up and archive): various bare drives
    Back-ups: 4-bay Silverstone USB3 enclosure, 4 x 3TB WD Red
    I also have a Samsung EVO 840 250GB drive which I plan to use as the System drive, but I’ve just not gotten around to installing everything on it yet.
    My motherboard has 6 x 3Gb/s and 2 x 6Gb/s SATA ports and has an onboard RAID, using an Intel ICH10R Southbridge controller.  I have a DVD drive using one of the SATA ports.
    So the main questions are:
    Which graphics card should I get to replace the GTX470?
    What other upgrades should I be looking at to improve performance?
    Your feedback would be very much appreciated.

    That website shows other GTX 960s from EVGA and MSI available if needed. If you want to spend more, the GTX 970 is reasonably priced for its performance over the GTX 960, unlike the GTX 980.
    Intel has been switching CPU sockets constantly lately, so it's the norm that upgrading an Intel CPU will require a new motherboard. That is the situation you are in with that motherboard. So when you decide to upgrade the CPU, you will be looking at a new CPU, motherboard, and possibly even new DDR4 RAM.
    If you are looking for smooth editing, it depends on the media codecs being used and how many effects are being applied. Premiere is pretty efficient for video playback on the timeline with a good video card and fast HDDs. You can add the video card and HDD RAID upgrades now and see how the system works for you, then decide if and when you want to upgrade the CPU/motherboard/RAM.

  • Firefox seems to be my only browser that puts ads on my Facebook page. How can I stop this? I love Firefox but hate the ads and will use another browser until it's solved.

    Whenever I open Facebook through Firefox I get pop-up ads and such. I do not seem to get this when using other browsers like Safari or IE. It's sooo annoying!!! I will use another browser until it's solved.

    Since you have encountered the problem at other sites, not just Firefox, I'd bet that it is an Internet Explorer problem, not the web sites. Actually, I am sure it is an IE problem.
    Try running Windows Update, perhaps there are some updates for Internet Explorer that might resolve the problem. If you don't know how to run Windows Update, use the Help on your system to look for information on Windows Update.
    If that does not help, use Google to search for information on IE9 locking up when downloading.
    Stan
