Disk size and privacy

Hi,
I signed up for the Azure free trial and created a Windows Server 2008 VM. By default, I got two partitions,
C and D, with about 100 GB of space.
I downloaded about 20-30 GB of data using the server, and the next day someone (Microsoft, I guess) had been using my server: the D drive was disabled, the
C drive was reduced to 30 GB, and the application "OneDrive" was deleted.
I also couldn't reattach drive D on this machine; its status was "locked". I also couldn't resize the C drive beyond 30 GB.
I was logged in on the web to sites like Facebook and Hotmail, and someone had access to my
VM server. Where is the privacy?
If I had overloaded the server with downloads or something, I could have been given a warning, or at least a throttled internet connection. Instead, someone was using my server, or knew what I was doing, and modified the VM's disks and uninstalled one or more programs.

Hi Nedims,
I suggest you don't panic for the moment; check the operation logs for details in the Management portal under Settings.
On a VM, you get 2 disks:
OS Disk – drive C:
Temporary Storage Disk – drive D:
The D:\ drive is a temporary drive, so prefer not to keep any data on it. The temporary storage disk is used to store temporary information, for example if you need to cache content such as pictures or documents. If the machine is restarted, or if something happens
to it, all the content of this disk is lost. Because of this you should never store data that needs to be persisted on this disk; its purpose is not to persist data. It won't get locked out.
Also, regarding the privacy concern you raised: unless you publicly disclose your VM name, storage account, IP address, and the user name and password used to log on, nobody can access your VM on Azure. Data on Azure VMs is highly secured.
You can check the operation logs in the Management portal to see what recent activities were performed on your VM.
Ref: http://blog.ict.eu/2013/04/windows-azure-operation-logs/
Also, I suggest you open a support case if you still need further assistance on data loss or security for Azure VMs.
Hope this helps you.
Girish Prajwal

Similar Messages

  • I keep getting this message. I decided to delete whatever I can to free up some space, and then I get back the few GB I have left on the startup disk. Then after a day I get the message again. I check the startup disk size and notice I have "zero KB" left. ??????

    Click the Apple menu icon at the top left of your screen. From the drop-down menu click About This Mac > More Info > Storage.
    Make sure there's at least 15% free disk space.
    Restart your Mac after freeing up disk space, then check Storage again.
    Another way to view available space:
    Control-click the Macintosh HD icon on your Desktop, then click Get Info.
    In the Get Info panel you'll see: Available & Capacity.
    Again, make sure there's at least 15% free disk space.
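    If you prefer checking from a script, here is a minimal sketch (it assumes Python 3 is installed and that the startup volume is mounted at "/"; the 15% figure is just the rule of thumb above):

    import shutil

    # Query total and free bytes for the startup volume (assumed to be mounted at "/").
    usage = shutil.disk_usage("/")
    free_ratio = usage.free / usage.total

    print(f"Total: {usage.total / 1e9:.1f} GB, free: {usage.free / 1e9:.1f} GB ({free_ratio:.0%})")
    if free_ratio < 0.15:
        print("Less than 15% free -- consider clearing more space.")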

  • Disk size and backups

    Hello
    I wonder what disk size is required on HANA when migrating from e.g. Oracle to HANA?
    E.g. under Oracle I have a 1 TB database (and somewhat more if I also back up at OS level).
    What will I have under HANA? How much will there be to back up?
    Thank you a lot
    Jan

    Dear Jan,
    The first thing we need to perform before migrating from AnyDB to HANA is memory sizing. SAP provides specific notes for memory sizing of different SAP applications, as mentioned below:
    1793345 - Sizing for SAP Suite on HANA
    1637145 - SAP BW on HANA: Sizing SAP In-Memory Database
    1872170 - Suite on HANA memory sizing
    1736976 - Sizing Report for BW on HANA  
    The objective of the above is to get the footprint of uncompressed data in AnyDB.
    That forms one of the parameters for sizing the HANA DB.
    Based on this sizing information one would choose a suitable T-shirt-sized HANA appliance from the hardware vendor. We then apply the
    traditional rule of thumb: RAM should be at least 2 x (uncompressed DB / compression factor), and disk space should be at least 3-4 x RAM for the persistence layer plus 1 x RAM for logs.
    The compression factor might be around 4-7 times depending on the type of SAP application.
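    As a rough, purely illustrative example of that rule of thumb (the 1 TB source database and the compression factor of 5 below are assumed numbers, not sizing advice):

    # Hypothetical sizing numbers plugged into the rule of thumb above
    uncompressed_db_gb = 1024                                   # assumed 1 TB source database
    compression_factor = 5                                      # assumed; typically ~4-7 per the note above

    ram_gb = 2 * (uncompressed_db_gb / compression_factor)      # ~410 GB of RAM
    persistence_gb = 4 * ram_gb                                 # 3-4 x RAM; the upper bound is shown here
    log_gb = 1 * ram_gb

    print(ram_gb, persistence_gb, log_gb)                       # roughly 410 / 1638 / 410 GB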
    Regards
    Arshad

  • Maximum recording time based on disk size and ram?

    I read about changing the tempo and beats per measure - and hangtime posts a link re: max. time - thanks for that.
    But I'm thinking about taking my iBook (1.4 GHz processor and 512 MB of RAM) to a gig to record approximately 3 hours straight - is this going to work?
    Am I limited by free disk space as well (I'm not sure how much I have!)?
    thanks in advance.

    Well, you read HT's FAQ which tells you that GB 2 will never record for 3 hours straight.
    Anyway, I wouldn't use GB to record, since it's not reliable enough; I would use a plain audio editor like Audacity or Sound Studio - they are made for that purpose.
    As for disk space: you need about 10 MB per minute if you record "CD quality" (which is what GB uses). That adds up to just short of 2 gigabytes for 3 hours. Any Finder window of your hard disk will tell you how much space is available.
    Processor speed and RAM are not an issue when you're recording one single audio track.
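    For reference, a quick back-of-the-envelope check of that estimate (assuming "CD quality" means 16-bit stereo at 44.1 kHz; Python is just used as a calculator here):

    # CD-quality stereo: 44,100 samples/sec x 2 channels x 2 bytes per sample
    bytes_per_second = 44_100 * 2 * 2                 # = 176,400 bytes/sec, about 10 MB per minute
    total_bytes = bytes_per_second * 3 * 3600         # 3 hours of recording
    print(total_bytes / 1024**3, "GiB")               # roughly 1.8 GiB, i.e. just short of 2 GB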

  • MS 6147 max memory and disk sizes

    For the MS 6147 can anybody confirm the max Hard disk size and max memory.  
    It will not recognise the 40 GB HD I am trying to fit, and only recognises half of the 256 MB memory SIMM I have fitted.
    The BIOS version is 1.9 but I think that is a 'special' by Packard Bell.  The MSI BIOS download site makes no mention of disk problems rectified right up to V1.8 which is the latest, with the exception of one for the ZX chipset only which addresses EDMA 66 problem.
    Anybody got a definitive answer on this?

    It supports a maximum memory size of 256 MB (8M x 8) or
       512 MB (16M x 4) registered DIMM only.
    How many chips are on the DIMM is what counts with older boards.
    Go to the drive maker's web site, get the jumper settings to limit the drive to 32 GB, and try that.
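    If it helps, the roughly-32 GB figure on boards of that era usually comes from the old BIOS CHS addressing ceiling; a quick sanity check of the arithmetic (the 65,535-cylinder / 16-head / 63-sector / 512-byte geometry is the standard assumption here):

    # Classic BIOS CHS ceiling that many late-90s boards run into
    cylinders, heads, sectors, bytes_per_sector = 65_535, 16, 63, 512
    limit_bytes = cylinders * heads * sectors * bytes_per_sector
    print(limit_bytes / 1e9, "GB")   # ~33.8 GB, which is why larger drives offer a "32 GB clip" jumper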

  • WMV and Disk Size issues

    So I am a pretty avid Encore user and I have come into some issues lately and could use some help.
    Background-
    I filmed a 14 hour conference on SD 16:9 mini dv
    I captured 14 hours with Premiere as .AVI - I edited the segments and exported as .AVI
    I used Media Encoder to convert the files to NTSC Progressive Widescreen High Quality (.m2v)   - Reduced the file size drastically
    I then used Media Encoder to convert the .m2v files to .wmv files - Reducing the conference size to 5.65 GB in total.
    I then imported the .wmv into Encore - my issues begin
    At first, Encore CS4 imported the .wmv files without a problem, however the disk size (of 5.65 GB) registered in Encore as around 13 gigs???  Why is that?  The .wmv files only consume 5.65 GB on my hard drive.  Where is this file size issue coming from?
    So then Encore CS4 gets upset that I have exceeded the 8.5 DL Disk size and crashes...
    I reopen the program and try to import my .wmv files again (forgot to save like an idiot).  3 of 8 .wmv files import and then Encore starts giving me decoder errors saying I cannot import the rest of the .wmv files... 
    Can anyone help me with this issue?  I'm quite confused by all of this.  Things seemed to work fine (sorta) at first and now Encore is pissed.
    I want to get this 14 hour conference onto 1 DL DVD, and I thought it would be as simple as getting the files reduced to a size suitable for an 8.5 GB disc.  Why is my way of thinking incorrect?
    Thanks for any help,
    Sam

    ssavery wrote:
    Thanks everyone for your help.
    I'm still not giving up....  It's become an obsession at this point.  My uncle does this with kids' movies for his children.   He'll download and compress entire seasons of children's shows and put them all on one DVD, which he then plays in a DVD player for them to watch.  I'm currently trying to get ahold of him....
    Thanks for the help
    Sam
    I've done this as well for shows from the 80s that will never again see the light of day... I use VSO Software's "ConvertXtoDVD v4". I ONLY use it for archival purposes, for Xvid or WMV or stuff that Encore would throw fits over. The menus are mainly default stock stuff, but for these projects I'm not concerned about menus or specific navigation, I just need to get the job done. I can squeeze around 15 hrs of 720x480 onto one DL (it compresses the ever-living daylights out of the video... but for most of the source footage it really doesn't matter at that point; it's mostly all VHS archives I use with this program anyway). If you just absolutely HAVE to have a 1-disc solution, you could check that app out, burn it, and see how it looks.
    Edited to add: to really squeeze stuff in, you can also use a DVDFab program (any version should do... older ones are cheaper). Make a disc image with ConvertX; if you have a lot of footage it may push beyond the normal boundary of a DVD-DL and fail the burn. Then you can just import the disc image into DVDFab, choose to burn it to a DVD-DL, and it may compress it by about 3-7% more to fit. I would NEVER use this method EVER for a client... but if you are just hell-bent on doing 1 disc, try these 2 apps out. It may work out if you can live with the compression.
    If you do try this, I recommend this workflow: open Premiere with your first captured AVI, set up your chapters how you want them, then save each chapter or lecture or segment as its own AVI. Import all of those separately into ConvertX and set it up to play one after the other as each segment ends. [I can't confirm this 100%, because I usually drop in already-compressed files... but if for some reason it doesn't want to work out, I would suggest dropping in the MTS files instead.] (If, say, you want a new "movie" for each lecture instead, with chapters per movie, that can be done too... but it's more work; I can expound later if need be.)  To save time on encoding, set the menu to the "minimalist" menu, which is strictly text. Then just create an ISO. If you do the full thing, I can almost guarantee you'll have to use DVDFab to burn to disc, because it'll probably be about 5-8% overburn.
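    For a sense of why 14-15 hours on one dual-layer disc means very heavy compression, here is a rough, purely illustrative bitrate calculation (the 8.5 GB capacity and 14-hour runtime come from this thread; everything else is plain arithmetic):

    # Average bitrate available when fitting 14 hours onto an 8.5 GB dual-layer DVD
    capacity_bits = 8.5e9 * 8                       # DL capacity is quoted in decimal gigabytes
    runtime_seconds = 14 * 3600
    avg_bitrate_mbps = capacity_bits / runtime_seconds / 1e6
    print(f"{avg_bitrate_mbps:.2f} Mbit/s")         # ~1.35 Mbit/s for video and audio combined

    Typical DVD-Video encodes run at several Mbit/s, so an average of about 1.35 Mbit/s is where the visible quality loss comes from.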

  • Disk size in Solaris 10

    I have some confusion about the disk subsystem in Solaris that I am trying to clarify on this forum.
    I recently installed Solaris 10 on a SPARC box. After installation, format gives the output below.
    0 root wm 19491 - 29648 4.88GB (10158/0/0) 10239264
    1 swap wu 0 - 4062 1.95GB (4063/0/0) 4095504
    2 backup wm 0 - 29648 14.25GB (29649/0/0) 29886192
    From the above output, is the size of my disk 14 GB, or is it 14+2+5 = 21 GB?
    I am trying to learn ZFS, so I want another partition on this disk so that I can create a ZFS pool on it.
    I went to single-user mode using the CD. From the above format output I assumed I had 21 GB of disk and 14 GB of free space, so I created another partition of 14 GB. Now format gives the output below.
    0 root wm 19491 - 29648 4.88GB (10158/0/0) 10239264
    1 swap wu 0 - 4062 1.95GB (4063/0/0) 4095504
    2 backup wm 0 - 29648 14.25GB (29649/0/0) 29886192
    3 reserved wm 0 - 29127 14.00GB (29128/0/0) 29361024
    When creating the ZFS pool, it gave me a warning that the partition I specified overlaps the root partition (the first partition), and it said to use the "-f" option.
    With "-f", it was created successfully.
    If I now assume that the size of my disk is only 14 GB, then:
    (1) How come two partitions point to the same area of the disk?
    (2) How come two different filesystems point to the same area?
    Can anyone please clarify my doubts? Thank you.

    Assuming a standard labeled disk, it is standard practice to have section/slice 2 be the 'whole disk' for purposes of 'backup'. That would tend to indicate you have a 14 GB disk. A prtvtoc /dev/dsk/c?t?d?s2 (change the ?s to the right values) will give a little more on the disk geometry.
    In the display from format, column 4 is the start cylinder of the partition and column 5 is the end cylinder. From the first set of output it looks like cylinders 4063 to 19490 are not allocated.
    In the second set you have created a new slice (section 3) that overlaps both sections 0 and 1 - which is generally considered to be bad!
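    To see the overlap concretely, here is a small sketch that checks the cylinder ranges from the format output above (slice 2, the whole-disk slice, is left out since it overlaps everything by design):

    # Cylinder ranges taken from the format output above: slice -> (start, end)
    slices = {0: (19491, 29648), 1: (0, 4062), 3: (0, 29127)}

    def overlaps(a, b):
        # Two inclusive cylinder ranges overlap if neither ends before the other starts.
        return a[0] <= b[1] and b[0] <= a[1]

    for s1 in sorted(slices):
        for s2 in sorted(slices):
            if s1 < s2 and overlaps(slices[s1], slices[s2]):
                print(f"slice {s1} and slice {s2} overlap")
    # Prints that slice 3 overlaps both slice 0 and slice 1, matching the warning ZFS gave.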

  • Is there a max SATA disk size in OSX 10.4.11 and G4/1.25?

    Hello,
    I am trying to set up a G4/1.25 (2 gig in memory) as a fileserver with a SATA card and 2 x 3 TB seagate disks. This is the setup:
    http://lowendmac.com/ppc/mdd-power-mac-g4-1.25-ghz.html
    http://firmtek.com/seritek/seritek-1v4/
    http://eshop.macsales.com/item/Seagate/ST3000DM001/
    The system is OSX 10.4.11. I am unable to initialize the disks in disk utility. The process starts but then halts and says that it can not continue.
    My question is: is there a maximum disk size in OS X 10.4.11?
    Any help greatly appreciated.
    Best regards,
    Ingolfur Bruun

    Hello again,
    I tried to work my way around it, armed with your input, and succeeded.
    By using a FireWire dock I was able to see the disks in 10.4.11. What I did was partition the disks with ONE partition in Disk Utility and then format them with the GUID partitioning scheme instead of the Apple Partition Map scheme, which I had used before. And as I am using an ATA disk as the startup disk, it doesn't matter that the 3 TB disks are not bootable. They will only be used for data, not as system disks.
    You saved my day! Thanks again.
    Best regards,
    Ingolfur
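    That outcome fits the usual explanation: Apple Partition Map stores block addresses as 32-bit values, so with 512-byte blocks it tops out at roughly 2.2 TB, while the GUID (GPT) scheme does not have that limit. A quick sanity check of that figure (the 32-bit / 512-byte assumption is the whole calculation):

    # Largest disk addressable with 32-bit block numbers and 512-byte blocks
    max_blocks = 2**32
    block_size = 512
    print(max_blocks * block_size / 1e12, "TB")   # ~2.2 TB, below the 3 TB drives in question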

  • Silly question regarding sol 8 containers and disk sizes

    I've got what is probably just about the silliest question, but I can't seem to find an answer whilst searching around for the past couple of hours. Say I have 2 boxen, one is a sol 8 server and the other is a brand new install of sol 10/8. They have the exact same hardware, including disk size. If I want to turn the first box into a solaris 8 container running under the second, how do I reconcile the fact that the disk sizes are the same? The sol 8 box is only using, say, 25-30GB of the 72GB on the disk. Do I have to resize the slice into something smaller to enable it fit into a container on the second server? It would seem that is the case, but I didn't know if there was some magic I was not aware of. I've not done a ufsdump or flash archive before so I don't know if the 'empty' space on the disc will be disregarded, possibly allowing me to squeeze it onto my sol 10 server and allow me to resize it smaller in zfs. This topic isn't touched in all the tutorials I've read, so I assume it's either a completely retarded/braindead concern, or everyone always migrates these boxen onto servers with much more in the way of resources.
    Sorry if I offended anyone with my ignorance ; P

    No. Solaris 8/9 containers make use of the "branded zones" technology. But it's still the same thing, and there's no "disk image", at least not like you might think of for VMware or Xen.
    Now, if you want to call a ufsdump or flash archive an "image", then that's fine. But you can see that in either case the free space, or minor changes in size, are irrelevant. You're just copying files. It's a system image, not a "disk image".
                      Installer Options :
                            Option          Description
                            -a filepath     Location of archive from which to copy system image.
                                            Full flash archive and cpio, gzip compressed cpio,
                                            bzip compressed cpio, and level 0 ufsdump are
                                            supported. Refer to the gzip man page available in the
                                            SUNWsfman package.
    --
    Darren

  • Restoring the window size and position of Disk Utility

    I took my Mac mini in for a memory upgrade this afternoon, and the guy who was testing the new configuration of my mini started Disk Utility and changed the size of its window. I hate it so much when someone tweaks my personal stuff/settings that I could explode! How can I restore the window to its default size and position? I cannot find anything about this on the net, and this is the only setting I could not restore.

    I'm sorry, ignore that Applescript Snippet, use this one instead (make sure to open disk utility before you run it!):
    tell application "System Events"
        tell process "Disk Utility"
            set theSize to size of window 1
            set theNiceSize to (item 1 of theSize) & "x" & (item 2 of theSize) as string
            display dialog theNiceSize
        end tell
    end tell

  • My Time Machine 3 TB HD was encryption enabled and it took forever.  I tried reformatting; it is online, but I get this: "Partition map repair failed while adjusting structures to fit current whole disk size."  Any comments appreciated.

    This issue has been in discussion (actively) since last August here:
    https://discussions.apple.com/thread/4218970?start=0&tstart=0
    After months and months of new reports, it's pretty clear that this is an Apple Mountain Lion problem and one that Apple needs to address.  As one frustrated user noted:
    >>There is no consistent solution for a user.  Apple has to supply it.  All you can do is submit a bug report to
    >> http://www.apple.com/feedback    
    Please, if you are encountering this problem, you will save yourself a lot of wasted time and energy simply by joining me and others in asking Apple to fix it: make a bug report.
    Thanks!

  • HT1198 I shared disk space and my iPhoto library as described in this article. When creating the disk image, I thought I had set aside enough space to allow for growth (50G). I'm running out of space. What's the best way to increase the disk image size?

    Done. Thank you, Allan.
    The sparse image article you sent a link to needs a little updating (or there's some variability in prompts (no password was required) with my OS and/or Disk Utility version), but it worked.
    Phew! It would have been much more time consuming to use Time Machine to recover all my photos after repartitioning the drive. 

  • Taking photos to disk without losing photo size and quality

    I have tried everything I can think of to create photo CDs and DVDs from the photos I've imported into Aperture, but every time I burn them the photos are dramatically reduced in size and quality. I cannot give the CDs to clients, because the photos are no longer enlargeable. Any ideas? Please help.

    Welcome
    Go to Help >> Aperture User Manual
    Read: Working with Export Presets page 505
    DLS

  • Disk Utility and DiskWarrior trouble with external hard-drive.

    I have an external hard drive that doesn't show up in the Finder. In both Disk Utility and DiskWarrior it keeps mounting and unmounting. When I try a repair, Disk Utility spits this out as it unmounts...
    2011-07-28 22:43:43 -0700: Problems were encountered during repair of the partition map
    2011-07-28 22:43:43 -0700: Error: Some information was unavailable during an internal lookup.
    2011-07-28 22:43:43 -0700: : Some information was unavailable during an internal lookup.
    2011-07-28 22:43:43 -0700: [DUDiskController mountDisk] expecting DUDisk, but got nil
    On DiskWarrior it simply says 'Directory cannot be rebuilt due to disk hardware failure (-36,2747)
    Earlier today it would stay mounted and Disk Utility would spit out...
    Verify and Repair volume “Back-up Hard Drive”
    Checking file system
    Checking Journaled HFS Plus volume.
    Checking extents overflow file.
    Checking catalog file.
    Invalid sibling link
    Rebuilding catalog B-tree.
    Invalid node structure
    The volume Back-up Hard Drive could not be repaired.
    Volume repair complete.
    Updating boot support partitions for the volume as required.
    Error: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files.
    while DiskWarrior would try to fix it, but on
    Step 5: Locating directory data...
    Speed reduced by disk malfunction: 339,474, er make that 340,239, er make that 341,327
    and I finally gave up there.
    I try to fix the Invalid sibling link error via Terminal and I get this...
    imac-2:~ Migrated$ fsck_hfs -r /dev/disk2s2
    ** /dev/rdisk2s2 (NO WRITE)
    Can't open /dev/rdisk2s2: Resource busy
    Is there ANY THING I can do to get it to stay mounted and possibly fix this? This is my back-up hard-drive!!

    It's the cord being used that decides whether it shows up, or shows up and then goes away over and over. Using a cord with which it stays visible in the programs, when I run Disk Utility on the grayed-out part of the hard drive, it spits out...
    Verifying and repairing partition map for “MICRONET”
    Checking prerequisites
    Checking the partition list
    Checking for an EFI system partition
    Checking the EFI system partition’s size
    Checking the EFI system partition’s file system
    Checking all HFS data partition loader spaces
    Reviewing boot support loaders
    Checking Core Storage Physical Volume partitions
    Updating Windows boot.ini files as required
    The partition map appears to be OK
    while the part of the disk just below it that's light spits out
    Verifying volume “Back-up Hard Drive”
    Checking file system
    Error: This disk needs to be repaired. Click Repair Disk.
    Verify and Repair volume “Back-up Hard Drive”
    Starting repair tool:
    Checking file system
    Volume repair complete.
    Updating boot support partitions for the volume as required.
    Error: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files.
    Disk Utility stopped repairing “Back-up Hard Drive”: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files.
    while DiskWarrior tries to rebuild it and spits out a memory error.

  • Thoughts on Stream-to-Disk Application and Memory Fragmentation

    I've been working on a LabVIEW 8.2 app on Windows NT that performs high-speed streaming to disk of data acquired by PXI modules.  I'm running with the PXI-8186 controller with 1GB of RAM, and a Seagate 5400.2 120GB HD.  My current implementation creates a separate DAQmx task for each DAQ module in the 8-slot chassis.  I was initially trying to provide semaphore-protected Write to Binary File access to a single log file to record the data from each module, but I had problems with this once I reached the upper sampling rates of my 6120's, which is 1MS/sec, 16-bit, 4-channels per board.  With the higher sampling rates, I was not able to 'start off' the file streaming without causing the DaqMX input buffers to reach their limit.  I think this might have to do with the larger initial memory allocations that are required.  I have the distinct impression that making an initial request for a bunch of large memory blocks causes a large initial delay, which doesn't work well with a real-time streaming app.
    In an effort to see if I could improve performance, I tried replacing my reentrant file writing VI with a reentrant VI that flattened each module's data record to string and added it to a named queue.  In a parallel loop on the main VI, I am extracting the elements from that queue and writing the flattened strings to the binary file.  This approach seems to give me better throughput than doing the semaphore-controlled write from each module's data acq task, which makes sense, because each task is able to get back to acquiring the data more quickly.
    I am able to achieve a streaming rate of about 25MB/sec, running 3 6120s at 1MS/sec and two 4472s at 1KS/sec.  I have the program set up where I can run multiple data collections in sequence, i.e. acquire for 5 minutes, stop, restart, acquire for 5 minutes, etc.  This keeps the file sizes to a reasonable limit.  When I run in this mode, I can perform a couple of runs, but at some point the memory in Task Manager starts running away.  I have monitored the memory use of the VIs in the profiler, and do not see any of my VIs increasing their memory requirements.  What I am seeing is that the number of elements in the queue starts creeping up, which is probably what eventually causes failure.
    Because this works for multiple iterations before the memory starts to increase, I am left with only theories as to why it happens, and am looking for suggestions for improvement.
    Here are my theories:
    1) As the streaming process continues, the disk writes are occurring on the inner portion of the disk, resulting in less throughput. If this is what is happening, there is no solution other than a HW upgrade.  But how to tell if this is the reason?
    2) As the program continues to run, lots of memory is being allocated/reallocated/deallocated.  The streaming queue, for instance, is shrinking and growing.  Perhaps memory is being fragmented too much, and it's taking longer to handle the large block sizes.  My block size is 1 second of data, which can be up to a 1Mx4x16-bit array from each 6120's DAQmx task.  I tried adding a Request Deallocation VI when each DAQmx VI finishes, and this seemed to help between successive collections.  Before I added the VI, Task Manager would show about 7MB more memory usage than after the previous data collection.  Now it is running about the same each time (until it starts blowing up).  To complicate matters, each flattened string can be a different size, because I am able to acquire data from each DAQ board at a different rate, so I'm not sure preallocating the queue would even matter.
    3) There is a memory leak in part of the system that I cannot monitor (such as DAQmx).  I would think this would manifest itself from the very first collection, though.
    4) There is some threading/threadlocking relationship that changes over time.
    Does anyone have any other theories, or comments about one of the above theories?  If memory fragmentation appears to be the culprit, how can I collect the garbage in a predictable way?

    It sounds like the write is not keeping up with the read, as you suspect.  Your queues can grow in an unbounded fashion, which will eventually fail.  The root cause is that your disk is not keeping up.  At 24MBytes/sec, you may be pushing the hardware performance line.  However, you are not far off, so there are some things you can do to help.
    Fastest disk performance is achieved if the size of the chunks you write to disk is 65,000 bytes.  This may require you to add some double buffering code.  Note that fastest performance may also mean a 300kbyte chunk size from your data acquisition devices.  You will need to optimize and double buffer as necessary.
    Defragment your disk free space before running.  Unfortunately, the native Windows disk defragmenter only defragments the files, leaving them scattered all over the disk.  Norton's disk utilities do a good job of defragmenting the free space as well.  There are probably other utilities which also do a good job of this.
    Put a monitor on your queues to check the size and alarm if they get too big.  Use the queue status primitive to get this information.  This can tell you how the queues are growing with time.
    Do you really need to flatten to string?  Unless your data acquisition types are different, use the native data array as the queue element.  You can also use multiple queues for multiple data types.  A flatten to string causes an extra memory copy and costs processing time.
    You can use a single-element queue as a semaphore.  The semaphore VIs are implemented with an old technology which causes a switch to the UI thread every time they are invoked.  This makes them somewhat slow.  A single-element queue does not have this problem.  Only use this if you need to go back to a semaphore model.
    Good luck.  Let us know if we can help more.
    This account is no longer active. Contact ShadesOfGray for current posts and information.
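    The original code here is LabVIEW, but as a language-neutral sketch of the bounded-queue / fixed-chunk idea described in the reply above, the following illustrates the pattern (the 65,000-byte chunk size and the "alarm when the queue grows" advice come from the reply; the queue depth, file name, and everything else are made-up illustration values):

    import os
    import queue
    import tempfile
    import threading

    CHUNK_SIZE = 65_000          # write size suggested in the reply above
    MAX_QUEUE_DEPTH = 64         # bounding the queue back-pressures the producer instead of eating RAM

    data_queue = queue.Queue(maxsize=MAX_QUEUE_DEPTH)

    def producer(n_blocks):
        # Stands in for the DAQ tasks: pushes fixed-size blocks of "samples" onto the queue.
        for _ in range(n_blocks):
            data_queue.put(os.urandom(CHUNK_SIZE))
        data_queue.put(None)     # sentinel: no more data

    def consumer(path):
        # Stands in for the file-writing loop: drains the queue and writes fixed-size chunks.
        with open(path, "wb") as f:
            while True:
                block = data_queue.get()
                if block is None:
                    break
                f.write(block)
                # A queue that keeps growing means the disk is not keeping up.
                if data_queue.qsize() > MAX_QUEUE_DEPTH * 0.8:
                    print("warning: write queue is filling up")

    out_path = os.path.join(tempfile.gettempdir(), "stream_test.bin")
    writer = threading.Thread(target=consumer, args=(out_path,))
    writer.start()
    producer(200)                # ~13 MB of test data
    writer.join()
    print("wrote", os.path.getsize(out_path), "bytes to", out_path)

    The key difference from the design described in the question is that this queue is bounded, so a slow disk slows the producer down visibly rather than letting memory grow without limit.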
