Define Block Blob size in GB

Hi all,
I have used the following code to define the block blob size in MB and then download the file. It's working fine.
protected void btn_download_Click1(object sender, EventArgs e)
{
    Button btndownloadrow = (Button)sender;
    GridViewRow row = (GridViewRow)btndownloadrow.NamingContainer;
    Label lblfilename = (Label)row.FindControl("lblGrid_filename");
    string downloadfile = lblfilename.Text;
    AccountFileTransfer = CloudStorageAccount.Parse("DefaultEndpointsProtocol=http;AccountName=" + ACCOUNTNAME + ";AccountKey=" + ACCOUNTKEY);
    if (AccountFileTransfer != null)
    {
        BlobClientFileTransfer = AccountFileTransfer.CreateCloudBlobClient();
        ContainerFileTransfer = BlobClientFileTransfer.GetContainerReference(CONTAINER);
        ContainerFileTransfer.CreateIfNotExist();
    }
    var blob = ContainerFileTransfer.GetBlockBlobReference(downloadfile);
    var sasUrl = blob.Uri.AbsoluteUri; // note: this is the bare blob URI, not an actual SAS URL
    CloudBlockBlob blockBlob = new CloudBlockBlob(sasUrl);
    var blobSize = 551 * 1024 * 1024; // block blob size of 551 MB
    int blockSize = 1024 * 1024;      // chunk size of 1 MB
    Response.Clear();
    Response.ContentType = "application/octet-stream";
    string disHeader = "Attachment; Filename=\"" + blockBlob.Name + "\"";
    Response.AppendHeader("Content-Disposition", disHeader);
    // Open the blob stream once and read it sequentially, one chunk per iteration
    // (reopening it inside the loop would restart every read at offset 0).
    using (var blobStream = blockBlob.OpenRead())
    {
        for (long offset = 0; offset < blobSize; offset += blockSize)
        {
            if ((offset + blockSize) > blobSize)
                blockSize = (int)(blobSize - offset);
            byte[] buffer = new byte[blockSize];
            blobStream.Read(buffer, 0, buffer.Length);
            Response.BinaryWrite(buffer);
            Response.Flush();
        }
    }
    Response.End();
}
The problem I am facing is that when I try to define the block blob size in GB, I get an overflow error. I am trying to download a file of around 3 GB. I am using this:
  var blobSize = 3558 * 1024 * 1024; // trying to define a block blob size of around 3 GB; this is where the overflow error occurs
Could you please help me define the block blob size in GB so that I can download the file from Azure block blob storage?
Thanks.
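
For reference, the overflow is a C# arithmetic issue rather than a storage limit: 3558 * 1024 * 1024 is evaluated as a 32-bit int constant expression and exceeds Int32.MaxValue (2,147,483,647), so the compiler reports an overflow. A minimal sketch of the fix, assuming the same blockBlob variable and legacy StorageClient SDK as in the code above:

    // Force 64-bit arithmetic with a long literal so ~3.5 GB fits.
    long blobSize = 3558L * 1024 * 1024;
    // Better still: don't hard-code the size; ask the service for it.
    // (FetchAttributes populates blockBlob.Properties in the legacy SDK.)
    blockBlob.FetchAttributes();
    blobSize = blockBlob.Properties.Length;
    int blockSize = 1024 * 1024; // the 1 MB chunk size still fits in an int

The loop counter offset is already a long, so once blobSize is a long the rest of the download code works unchanged.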

Hi,
Thanks for sharing the solution for avoiding the overflow error; it will be very beneficial for other community members who have similar questions. If you run into any difficulty in future programming, we welcome you to post in the forums again.
Best Regards,
Jambor

Similar Messages

  • How to modify the blob size, or how to set the size?

    I want to know how to modify the BLOB size, or how to set the size.
    What's the default size of a BLOB?
    Thanks in advance.

    The BLOB datatype can contain binary data with a maximum size of 4 GB.
    When you store a 10 KB file, the database will only use about 10 KB to store it (depending on block size, etc.).
    If you want to limit the BLOB size, you can do something like this:
    SQL> create materialized view t_mv refresh fast on commit
    2 as select id, dbms_lob.getlength(x) len from t;
    Materialized view created.
    SQL> alter table t_mv add constraint t_mv_chk check (len < 100);
    Table altered.

  • Large Block Chunk Size for LOB column

    Oracle 10.2.0.4:
    We have a table with 2 LOB columns. The average blob size of one of the columns is 122K and of the other column is 1K, so I am planning to move the column with the big blob size to a 32K chunk size. Some of the questions I have:
    1. Do I need to create a new tablespace with a 32K block size and then create the table with a chunk size of 32K for that LOB column, or can I just create a table with a 32K chunk size on the existing tablespace, which has an 8K block size? What are the advantages or disadvantages of one approach over the other?
    2. Currently db_cache_size is set to "0"; do I need to adjust some parameters for the larger chunk/block size?
    3. If I create a 32K chunk, is that chunk shared with other rows? For example, if I insert a 2K blob, would the remaining 30K be available for other rows? The following link says the 30K will be wasted space:
    LOB performance: http://www.oracle.com/technology/products/database/application_development/pdf/lob_performance_guidelines.pdf
    Below is the output of v$db_cache_advice:
    select
       size_for_estimate          c1,
       buffers_for_estimate       c2,
       estd_physical_read_factor  c3,
       estd_physical_reads        c4
    from
       v$db_cache_advice
    where
       name = 'DEFAULT'
    and
       block_size  = (SELECT value FROM V$PARAMETER
                       WHERE name = 'db_block_size')
    and
       advice_status = 'ON';
    C1        C2         C3        C4
    2976      368094     1.2674    150044215
    5952      736188     1.2187    144285802
    8928      1104282    1.1708    138613622
    11904     1472376    1.1299    133765577
    14880     1840470    1.1055    130874818
    17856     2208564    1.0727    126997426
    20832     2576658    1.0443    123639740
    23808     2944752    1.0293    121862048
    26784     3312846    1.0152    120188605
    29760     3680940    1.0007    118468561
    29840     3690835    1         118389208
    32736     4049034    0.9757    115507989
    35712     4417128    0.93      110102568
    38688     4785222    0.9062    107284008
    41664     5153316    0.8956    106034369
    44640     5521410    0.89      105369366
    47616     5889504    0.8857    104854255
    50592     6257598    0.8806    104258584
    53568     6625692    0.8717    103198830
    56544     6993786    0.8545    101157883
    59520     7361880    0.8293    98180125

    With only a 1K LOB you are going to want to use an 8K chunk size; per the Oracle document on LOBs referenced in the thread above, the chunk size is the allocation unit.
    Each LOB column has its own LOB segment, so each column can have its own LOB chunk size.
    The LOB data type is not known for being space efficient.
    There are major changes in 11g, where Secure Files are available to replace traditional LOBs (now called Basic Files). The differences appear to be mostly in how the LOB data and segments are managed by Oracle.
    HTH -- Mark D Powell --

  • Toplink with mysql: problem with blob size

    I'm using TopLink with a MySQL database. I want to store some data in a blob.
    In MySQL there exist different sizes of blobs (in my case I need a MEDIUMBLOB).
    But if I create the schema for the database with JPA/TopLink it always creates a column with type BLOB.
    I can explicitly tell the database to use a MEDIUMBLOB like this:
    @Column("MEDIUMBLOB")
    But by doing this I limit my program to MySQL of course, as this data type is not known to other databases.
    Does anybody know a more elegant solution for setting the blob size?
    for example with hibernate it can be done this way:
    @Column(length=666666)

    Looks like you are using JPA, and in JPA you would set the columnDefinition to the type that you want, e.g.
    @Lob
    @Column(name="BLOBCOL", columnDefinition="MEDIUMBLOB")
    byte[] myByteData;
    As you mentioned, this does introduce a dependency on the database. However, you can always either put the Column metadata in XML, or override it with something else later in XML.
    The length attribute was intended and specified for use with Strings but I guess there is no reason why it couldn't be used for BLOBs as well (please enter an enhancement request, or submit the code to add the feature to EclipseLink). Be aware, though, that doing so at this stage is going to be introducing a dependency on the provider to support a non-spec defined feature.

  • HP Officejet Pro 8600 Plus e-All-in-One Printer - N911g cannot define a custom paper size.

    HP Officejet Pro 8600 Plus e-All-in-One Printer - N911g cannot define a custom paper size. I would like to have a custom size to work with a card-making program.

    Can you help me out?
    I have an HP Officejet Pro 8600 N911a
    When I try to print to a custom print size (2.75" x 6" in my case), I get the same error everyone else is reporting: "Paper detected does not match paper size or type selected. Make sure the paper size or type is correct to continue the job."
    I am currently trying to print using Microsoft Word for Mac 2011 version 14.4.5, but I have the same problem with printing from Adobe Reader 9.8.5 or Preview 8.0.
    I am running OS X Yosemite 10.10.
    My printer's firmware is "up to date" as of 28-Aug-2014.
    This is what I did:
    1. Open System Preferences from the Apple menu.
    2. Click on the Printers & Scanners icon.
    3. Click on the + sign under the list of printers to add a printer
    4. Click on IP
    5. Enter my IP address in the Address field (verified as "valid and complete host name or address")
    6. Ignore Protocol field (default is Line Printer Daemon - LPD)
    7. Ignore Queue field (default is blank for default queue)
    8. Ignore Name field (default is IP address)
    9. Ignore Location field (default is blank)
    10. Under the Use dropdown menu, I select "Select Software"
    11. HP Deskjet 9800 is not an option.
    I searched for 9800, but there is no HP printer with that number in the options.
    I tried selecting "HP DeskJet 980C - Gutenprint v5.2.3" to see if that would work, but I received the same error message as above. I tried selecting other random HP printers, but so far I've had no luck.
    I've been googling about for a way to manually add a printer driver to this "Select Software" list, but I've found nothing (apparently, no one else wants to do this). I did not find a driver download for the HP DeskJet 9800 for OS X Yosemite 10.10 on the Drivers & Software section of the hp.com website.
    Additional details on settings for Microsoft Word for Mac 2011 version 14.4.5:
    Selected File > Page Setup from menu
    Under the Paper Size drop-down menu, I selected Manage Custom Sizes.
    - Click + to add a paper size
    - Enter "2.75 in" in the width and "6 in" in the height paper size fields.
    - Enter "0 in" as the User Defined Non-Printable Area for top, left, right and bottom.
    - Double-click Untitled in the custom size list and rename the size to Receipt
    - Click OK to return to the Page Setup dialog box
    Under Settings: Page Attributes:
    - Format for "HP Officejet Pro 8600"
    - Paper size: Receipt
    - Orientation: "tall/portrait"
    - Scale: 100%
    - Click OK
    Select Format > Document and set all page margins to 0.25"
    Thank you. 

  • Azure Rest API PUT Block Blob Returns "The specified resource does not exist" CORS

    I am trying to upload a file to Azure Blob storage. For some reason, when I try to put a new block blob into storage it tells me the resource does not exist. I am sure it is something silly I am missing.
    According to the documentation:
    The Put Blob operation creates a new block blob or page blob, or updates the content of an existing block blob. Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with Put Blob; the content of the existing blob is overwritten with the content of the new blob. To perform a partial update of the content of a block blob, use the Put Block List (REST API) operation.
    CORS is setup and that seems okay.
    When I do a preflight and get this:
    Request URL:https://<account>.blob.core.windows.net/test/image.png
    Request Method:OPTIONS
    Status Code:200 OK
    Request Headers
    OPTIONS /test/image.png HTTP/1.1
    Host: <account>.blob.core.windows.net
    Connection: keep-alive
    Cache-Control: no-cache
    Pragma: no-cache
    Access-Control-Request-Method: PUT
    Origin: http://www.<site>.com
    User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.117 Safari/537.36
    Access-Control-Request-Headers: accept, content-type
    Accept: */*
    Referer: http://www.<site>.com/azure/
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Response Headers
    HTTP/1.1 200 OK
    Transfer-Encoding: chunked
    Server: Blob Service Version 1.0 Microsoft-HTTPAPI/2.0
    x-ms-request-id: 0d372e95-1524-460a-ab9c-7973d42a7070
    Access-Control-Allow-Origin: http://www.<site>.com
    Access-Control-Allow-Methods: PUT
    Access-Control-Allow-Headers: accept, content-type
    Access-Control-Max-Age: 36000
    Access-Control-Allow-Credentials: true
    Date: Thu, 27 Feb 2014 22:43:52 GMT
    But when I make the PUT request these are the results.
    Request URL:https://<account>.blob.core.windows.net/test/image.png
    Request Method:PUT
    Status Code:404 The specified resource does not exist.
    Request Headers
    PUT /test/image.png HTTP/1.1
    Host: <account>.blob.core.windows.net
    Connection: keep-alive
    Content-Length: 22787
    Cache-Control: no-cache
    Pragma: no-cache
    x-ms-blob-content-disposition: attachment; filename = "image.png"
    User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.117 Safari/537.36
    Content-Type: image/png
    x-ms-blob-type: BlockBlob
    Accept: application/json, text/plain, */*
    x-ms-version: 2013-08-15
    Origin: http://www.<site>.com
    x-ms-date: Thu, 27 Feb 2014 23:19:19 GMT
    Referer: http://www.<site>.com/azure/
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Response Headers
    HTTP/1.1 404 The specified resource does not exist.
    Content-Length: 223
    Content-Type: application/xml
    Server: Blob Service Version 1.0 Microsoft-HTTPAPI/2.0
    x-ms-request-id: d5a60c8b-356a-44ff-93af-0ea720b5591f
    x-ms-version: 2013-08-15
    Access-Control-Expose-Headers: x-ms-request-id,Server
    Access-Control-Allow-Origin: http://www.<site>.com
    Access-Control-Allow-Credentials: true
    Date: Thu, 27 Feb 2014 23:22:42 GMT

    Your request must be authenticated to be able to upload a blob. Please see our
    Windows Azure Storage: Introducing CORS blog post for more information on using Shared Access Signatures with CORS.
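    As a rough illustration of that advice, generating a SAS with the 2013-era .NET storage client might look like the sketch below; the container name, blob name, and policy values are hypothetical:

    // Hypothetical sketch: issue a short-lived, write-only Shared Access Signature
    // that the browser appends to the blob URL so the PUT is authenticated.
    string connectionString = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>";
    CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
    CloudBlobClient client = account.CreateCloudBlobClient();
    CloudBlobContainer container = client.GetContainerReference("test");
    CloudBlockBlob blob = container.GetBlockBlobReference("image.png");
    string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Write,
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30)
    });
    // The browser then PUTs to this URL instead of the bare blob URL.
    string uploadUrl = blob.Uri.AbsoluteUri + sasToken;

    Note that an unauthenticated request against a private container surfaces as 404 "The specified resource does not exist" rather than 403, which matches the trace above.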

  • Physical disk IO size smaller than fragment block filesystem size ?

    Hello,
    in one default UFS filesystem we have an 8K block size (bsize) and a 1K fragment size (fsize). In this scenario I thought all filesystem I/O would be 8K (or greater), but never smaller than the fragment size (1K). Since a UFS fragment/block is always several ADJACENT sectors on disk (on a disk with 512 B sectors), all physical disk I/O should, like filesystem I/O, never be smaller than 1K.
    But with the dtrace script from the DTrace Toolkit (bitesize.d) I can see I/O with a 512 B size.
    What is wrong in my assumptions, or what is the explanation?
    Thank you very much in advance!!

    rar wrote:
    Like Jim has indicated me in unix.com forum, That cross-post thread happens to be:
    http://www.unix.com/unix-advanced-expert-users/215823-physical-disk-io-size-smaller-than-fragment-block-filesystem-size.html
    You could have pasted the URL to be polite ...

  • Max_io_size equivalent in Linux and block/stripe sizes

    I'm configuring a Red Hat Linux 7.1 server for Oracle 9i Release 2. I'm trying to determine the best db_block_size and db_file_multiblock_read_count parameters. I know that these Oracle settings depend on the OS block size and the max_io_size of the OS.
    Does anyone know what the equivalent Linux parameter for max_io_size (Solaris) is and how I set it in Linux? Does resetting it involve reinstalling Linux? Any suggestions on an appropriate range to set it? Is the default Linux 1K block size OK? (The server is a Compaq DL380 with a 1.4 GHz processor and 1 GB RAM.)
    Additionally, I have a Compaq 5300 Series RAID (5i integrated) that we plan to configure with RAID 0+1. Our controller only goes up to a stripe size of 256K, with a default of 128K. For a "general"-type database that could hold up to 80 GB of data over 50 or so tables, with a possibly equal number of full-table scans and indexed scans, would you suggest I set the stripe size at 256K for the most flexibility down the road?
    I don't fully understand what it takes to configure Linux and RAID for the best I/O for Oracle. So, I'd really appreciate any suggestions, tips, or doc references that can help out.
    Thanks,
    Deb

    the ssd is both sd and ssd.. inside the sd.conf and inside the /etc/system...
    The following is a TNF trace of the I/O sizes of my process, showing that the kernel is breaking the I/O down.
    Sorry I wasn't clear on this part.
    62.059582 16.185079 480 1 0x3000338ecc0 0 strategy device: 584115552256 block: 60396848 size: 1048576 buf: 0x30000a78340 flags: 34088209
    306.154426 17.819569 480 1 0x3000338ecc0 0 strategy device: 584115552256 block: 60398896 size: 1048576 buf: 0x300035dcc00 flags: 34088209

  • How to define our line size in module pool?

    Dear Friends,
    I have a requirement to increase the length or size of a list which is called from a module pool using LEAVE TO LIST-PROCESSING.
    Please suggest the way.
    Regards
    Ricky

    It is very much the same as in report programming.
    REPORT z****** MESSAGE-ID 38 LINE-SIZE 190 LINE-COUNT 0 NO STANDARD PAGE HEADING.
    Use this REPORT statement at the top of the program to define the line size.

  • Blob sizes in mySql

    MySQL supports different blob sizes. I'm told that kodo does not support
    blobs in MySQL however I can override this by extending the MySQLDictionary
    class.
    This is what I was told.....
    "package com.xyz;
    import java.sql.*;
    import kodo.jdbc.schema.*;
    import kodo.jdbc.sql.*;
    public class CustomMySQLDictionary
    extends MySQLDictionary
    protected String appendSize (Column col, String typeName)
    if (col.getType () == Types.BLOB && col.getSize () > 0)
    return <sized blob string>
    return super.appendSize (col, typeName);
    Plug your dictionary into Kodo with:
    kodo.jdbc.DBDictionary: com.xyz.CustomMySQLDictionary
    In your metadata, set the size of your field with:
    <field name="blobField">
    <extension vendor-name="kodo" key="jdbc-size" value="xxx"/>
    </field>
    I have done this with a couple of minor changes. It almost does what I
    need it to do.
    Basically I want to create an ANT script which will enhance the necessary
    files, create my database, and then dump the schema to a file (so that I
    can have a SQL script to run later). I have everything working with the
    exception of creating the database. The database gets created;
    however, the column where I specify a blob still gets created as a plain BLOB. I
    want to use a different blob size.
    The above code helped me with dumping the schema to a file and dumps the
    appropriate data as I would expect it.
    So how do I get the database created properly? Also, is there a way to
    automatically generate the schema without actually creating the database
    first?

    After doing some further investigation, the call "col.getType()" returns
    a Types.VARBINARY and not a Types.BLOB. So I changed my code to test for
    this instead and also test for the column size to indicate the appropriate
    BLOB size. Is this correct?
    It appears that a LONGBLOB is coming back as a "LONG VARBINARY", even
    though I am testing for this, I cannot get a LONGBLOB. Why?
    Also, prior posts indicate that BLOBs are not supported in Kodo and MySQL.
    I'm confused as to what this means because in the src files that you
    delivered with your product, I see references to BLOB, MEDIUMBLOB, etc.
    The file I'm referring to is kodo.jdbc.sql.MySQLDictionary. So tell me
    again why blobs aren't supported?

  • FREE_SELECTIONS_INIT defining blocks on selection-screen.

    Hi,
    is it possible to define blocks on the selection screen if I use the function FREE_SELECTIONS_INIT?
    And is it possible to define the select-option as obligatory?
    Best regards,
    Marcus

    Hi,
    Yes, you can make select-options obligatory by using:
    SELECT-OPTIONS:
       s_abs FOR <some thing> OBLIGATORY.
    As for the function module: it receives information about the set of fields for which dynamic selections should be possible, and returns a selection ID. I think defining blocks there is not possible.
    Regards,
    Pavan

  • Urgent : OS Block header size (convert raw device to filesystem using dd)

    Hi,
    We need to convert Oracle datafiles on raw devices to filesystem files.
    Environment: Oracle 8.1.7 on Solaris 8.
    Our unix team is unable to tell us the value of the OS block header size to be used for the skip/iseek parameter of the "dd" command. I know that RMAN "copy datafile" can be used, but it does not help for converting redo log files.
    could someone please help.
    Thanks.
    Rakesh

    The DB has been shut down, cold backups have been taken, and we cannot make any changes at this point.
    Is it so tough to know this value?
    Just to clarify: in the following dd command, what should be the value of skip?
    dd if=/dev/rdsk/rawfile1 of=/data/file1 bs=8192 skip=??? count=100000
    Thanks,
    Rakesh

  • Is there a way to define the ideal size of the java heap memory?

    Hello all!
    Is there a way to define the ideal size of the Java heap memory? I'm using a server with (IR, FR, WA) installed, running Windows Server 2008 R2 with 32 GB of RAM. I have another server with the same configuration using Essbase. How can I set the heap memory? I have around 250 users (not simultaneous).
    Regards,
    Rafael Melo
    Edited by: Rafael Melo on Aug 17, 2012 5:40 AM

    For 2008 R2, which is 64-bit, you can use the following.
    For FR, in the Windows registry under
    HKEY_LOCAL_MACHINE\SOFTWARE\Hyperion Solutions\Hyperion Reports\HyS9FRReport
    Xms and Xmx can be 1536 each.
    For Workspace:
    Start the "Start Workspace Agent UI" service and open Configuration Management Console (CMC) via http://localhost:55000/cmc/index.jsp
    For the Workspace Agent / Common Services Java heap size, Xms and Xmx can be 1024 each.

  • iPhone 5 - can you reduce the megapixel size of pictures without incurring data charges? I understand if I e-mail my pictures to myself, there is an option to define the pixel size of the picture I send. What if I upload from phone to computer via cable?

    iPhone 5 - can you reduce the megapixel size of pictures without incurring data charges? I understand if I e-mail my pictures to myself, there is an option available that allows me to define the pixel size of the picture I send, but I believe this process incurs data charges.
    What if I upload photos from my phone to my computer via cable? I don't believe this incurs data charges, but I cannot find an option to reduce the megapixel size of the pictures.


  • How can I transform a file stream into a stream of a block blob to download block blobs?

    I want to change this code to work with block blobs, but I can't seem to figure it out on my own.
    // initialize the http content-disposition header to
    // indicate a file attachment with the default filename
    System.String disHeader = "Attachment; Filename=\"" + "sample.txt" + "\"";
    Response.AppendHeader("Content-Disposition", disHeader);
    // Download the blob to a file stream.
    using (FileStream fs = new FileStream(this.Server.MapPath(@"Data\data.txt"), FileMode.Open))
    {
        int chunkSize = 4096;
        byte[] buffer = new byte[chunkSize];
        int read = 0;
        Response.AddHeader("Content-Length", fs.Length.ToString());
        do
        {
            read = fs.Read(buffer, 0, chunkSize);
            Response.OutputStream.Write(buffer, 0, read);
        } while (read > 0);
    }
    Response.End();

    Hi,
    What detailed scenario do you want to achieve? If you want to download files from a block blob stream, please have a look at the article below.
    http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-blobs/
    If I misunderstand, please feel free to let me know.
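    As a rough sketch (assuming a CloudBlobContainer reference named container and the same legacy .NET storage client used elsewhere in this thread), the FileStream in the snippet above can be swapped for the blob's read stream:

    // Hypothetical sketch: stream a block blob to the HTTP response in chunks,
    // mirroring the FileStream version in the question.
    CloudBlockBlob blockBlob = container.GetBlockBlobReference("sample.txt");
    blockBlob.FetchAttributes(); // populates Properties.Length for Content-Length
    Response.AppendHeader("Content-Disposition",
        "Attachment; Filename=\"" + blockBlob.Name + "\"");
    Response.AddHeader("Content-Length", blockBlob.Properties.Length.ToString());
    using (Stream blobStream = blockBlob.OpenRead())
    {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = blobStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            Response.OutputStream.Write(buffer, 0, read);
        }
    }
    Response.End();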
    Best Regards,
    Jambor
