vSphere Integrated Containers Engine Version v1.1.1
v1.1.1 is tagged on the releases/1.1.1 branch.
Changes from v1.1.0: v1.1.0...v1.1.1
This is an interim pre-release and does not include support from VMware Global Support Services (GSS). Support is provided at the OSS community level only. See https://github.com/vmware/vic/blob/master/CONTRIBUTING.md#community for details on how to contact the VIC Engine community.
Resolved Issues
The following issues found in vSphere Integrated Containers Engine 1.1.0 have been fixed in 1.1.1:
- Container VMs immediately power off with `Server error from portlayer: ServerFaultCode: Permission to perform this operation was denied`. #4817
  This error occurs if the `--ops-user` option was used when deploying VCHs. Operations user accounts require more permissions than were initially documented. The list of required permissions in Use Different User Accounts for VCH Deployment and Operation has been updated to include all of the required permissions.
- Race condition in vSAN can cause the VCH `kvstore` to enter an inconsistent state. #4601
  VCHs store their key-value state in a file on the datastore named `kvstore`. When values are updated, a new version is uploaded as `kvstore.tmp`, which then overwrites the existing file. Race conditions can occur in vSAN if you upload a file and then quickly move that file. If this condition occurs, the `kvstore` of the VCH can enter an inconsistent state and you see an error similar to the following:
  `Error response from daemon: failed to save image cache: [PUT /kv/{key}][500] putValueInternalServerError &{Code:500 Message:Error uploading apiKV.dat: File [vsanDatastore] 5568e458-4f51-10c5-3994-020...`
  This error mostly occurs when running `docker rmi`, but could also occur when performing `docker pull`, `docker run`, or `docker create` on a new image.
- Installing the vSphere Client plug-in fails on VCSA. #4906
  When you attempt to install the vSphere Client plug-in for vSphere Integrated Containers on a vCenter Server Appliance, the installation fails with the error `failed to find target plugin`.
- vSphere Web Client plug-in does not appear after successful installation. #4948
  When you install the Flex plug-in for the vSphere Web Client, the installation process reports success but the plug-in does not appear in the vSphere Web Client.
- vSphere Integrated Containers Engine files not upgraded. #5013
  If you upgrade the vSphere Integrated Containers appliance from 1.1.0 to 1.1.1, vSphere Integrated Containers Registry and Management Portal upgrade successfully, but the downloads for vSphere Integrated Containers Engine remain at 1.1.0.
Known Issues
- vRealize Automation cannot create VCH blueprints that use bridge networks. #3542
  vRealize Automation cannot create blueprints for VCHs that have either on-demand or existing bridge networks. If you deploy such blueprints, containers cannot be reached over the network. Only the default bridge network works in the VCH.
  Workaround: Use container networks instead of bridge networks in VCH blueprints.
- HTML5 vSphere Client plug-in does not work with vCenter Server 6.5u1. #6052
  If you deployed the HTML5 vSphere Client plug-in for vSphere Integrated Containers with vCenter Server 6.5.0, and then subsequently upgraded vCenter Server to version 6.5u1, the plug-in no longer works. Attempts to install the HTML5 plug-in on vCenter Server 6.5u1 fail.
  Workaround: This issue is resolved in vSphere Integrated Containers 1.2. Upgrade to vSphere Integrated Containers 1.2, upgrade the HTML5 vSphere Client plug-in, and restart the vSphere Client service. For information about upgrading to vSphere Integrated Containers 1.2, see Upgrading vSphere Integrated Containers.
- vSphere Client plug-ins do not install on vCenter Server for Windows. #5204
  When you run the `/vic/ui/vCenterForWindows/install.bat` script to install either the HTML5 or Flex plug-in for the vSphere Client, the installer reports success and the plug-in successfully registers as a vCenter Server extension, but the plug-ins do not appear in the vSphere Client or vSphere Web Client. An error in `install.bat` prevents the plug-in files from uploading to vCenter Server.
  Workaround: Open `install.bat` in a text editor and insert a missing `v` character before `%version%.zip` in the following line:
  - Before: `SET PLUGIN_URL=%vic_ui_host_url%%key%-%version%.zip`
  - After: `SET PLUGIN_URL=%vic_ui_host_url%%key%-v%version%.zip`
  Then run `install.bat`.
- Running `docker create` results in `InvalidDeviceSpec`. #4666
  When attempting to create a VMDK for the read-write layer of a container during `docker create`, the parent VMDK sometimes cannot be accessed or located, resulting in an `InvalidDeviceSpec` fault. This is specific to vSAN datastores.
  Workaround: Attempt to create the container again.
- Cannot log in to insecure registries that use self-signed certificates. #4681
  If you deploy a VCH with the `--insecure-registry` option, and that registry uses self-signed certificates, attempts to use `docker login` to log in to the registry fail with `Error response from daemon: Unexpected http code: 400, URL: http://X.X.X.X:443/v2/`. However, performing `docker pull` from that registry without attempting `docker login` succeeds.
  Workaround: Download the self-signed certificate from the registry and redeploy the VCH, specifying the path to this certificate in the `--registry-ca` option, as in the sketch below.
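
  A minimal sketch of this workaround, assuming the registry certificate is saved locally as `ca.crt`; the registry address, vCenter Server target, credentials, and VCH option values shown here are placeholders, not values from this release:

  ```bash
  # One possible way to retrieve the registry's self-signed certificate.
  # registry.example.com is a placeholder address.
  openssl s_client -connect registry.example.com:443 -showcerts </dev/null \
    | openssl x509 -outform PEM > ca.crt

  # Redeploy the VCH, passing the certificate with --registry-ca.
  # Target, credentials, and network/datastore names are illustrative only.
  vic-machine create \
    --target vcenter.example.com \
    --user 'administrator@vsphere.local' \
    --name vch1 \
    --bridge-network vch1-bridge \
    --image-store datastore1 \
    --registry-ca ca.crt \
    --no-tlsverify
  ```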
- Docker client 1.13 returns an incorrect error message on non-existent objects. #4573
  If you run a Docker command against a non-existent object, for example `docker inspect fake`, where `fake` is an object that does not exist, vSphere Integrated Containers Engine reports `Error response from daemon: vSphere Integrated Containers does not yet support Docker Swarm`. The error message should be `Error: No such image, container or task: fake`.
- Publishing all exposed ports to random ports with the `-P` option is not supported. #3000
  vSphere Integrated Containers Engine does not support `docker create/run -P`.
- Shared data volumes are not supported. #2303
  vSphere Integrated Containers Engine does not support shared data volumes, meaning that multiple containers cannot share a common vSphere volume. As a consequence, using vSphere Integrated Containers Management Portal to provision applications that include containers that share volumes fails when using vSphere Integrated Containers Engine, with the error `Server error from portlayer: Failed to lock the file`. Do not design or import such templates in vSphere Integrated Containers Management Portal and do not attempt to deploy applications based on such templates when using vSphere Integrated Containers Engine.
- Occasional disconnection during vMotion. #4484
  If you are attached to a container VM that is migrated by vMotion, the SSH connection to the container VM might drop when vMotion completes.
  Workaround: Perform `docker attach` after the vMotion completes to reattach to the container.
- Using volume labels with `docker-compose` causes a plugin error. #4540
  Setting a label on a volume in the Docker Compose YML file results in `error looking up volume plugin : plugin not found`.
  Workaround: Set the volume driver explicitly as `local` or `vsphere` in the Compose file. For example:

  ```yaml
  volumes:
    volume_with_label:
      driver: local
  ```
- VCH Admin portal does not respect proxy settings. #4557
  This affects the internet connectivity status shown on the VCH Admin portal, which does not use the proxy that is configured for the rest of the VCH.
- vSphere Integrated Containers Management Portal cannot pull images from an insecure vSphere Integrated Containers instance when creating a container using vSphere Integrated Containers Engine. #4706
  Creating a container in vSphere Integrated Containers Management Portal with vSphere Integrated Containers Engine as the only Docker host results in the error `certificate signed by unknown authority`.
  Workarounds: Specify the vSphere Integrated Containers Registry port when you set the `vic-machine create --insecure-registry` option, or provide a CA certificate in the `--registry-ca` option. An illustrative command follows.
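
  A sketch of the first workaround, assuming the registry listens on port 443; the target, credentials, object names, and registry address are placeholders:

  ```bash
  # Include the registry port explicitly in the --insecure-registry value
  # when deploying the VCH. All values below are illustrative only.
  vic-machine create \
    --target vcenter.example.com \
    --user 'administrator@vsphere.local' \
    --name vch1 \
    --bridge-network vch1-bridge \
    --image-store datastore1 \
    --insecure-registry registry.example.com:443 \
    --no-tlsverify
  ```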
- Specifying the same datastore for the volume store and image store leads to unintended volume loss on ESXi hosts. #4478
  When deploying VCHs directly to ESXi hosts, if you specify `vic-machine create --name dev --image-store datastore1 --volume-store datastore1/dev:default`, volumes go into the same folder as images and the VCH. If you then run `vic-machine delete`, the volumes are deleted, even if you do not specify `--force`. This does not occur when deploying to vCenter Server.
- Containers have access to vSphere management assets. #3970
  Containers that are attached to the bridge network can use NAT through the VCH, and so have full access to assets on the management and client networks, or they can be reached via the gateway on those networks. As a consequence, any container can access vSphere assets.
- Deleting container VMs by using the vSphere Client can remove the underlying image. #2928
  If you delete a container VM by using the vSphere Client, attempts to create other containers that use the same base image can fail if the base image has been removed.
  Workaround: As stated in the documentation, always use Docker commands to perform operations on containers. Do not use the vSphere Client to perform operations on container VMs.
- Deployment fails if you configure a VCH to use 4 NICs. #2802
  A VCH supports a maximum of 3 distinct network interfaces. Because the bridge network requires its own port group, at least two of the public, client, and management networks must share a network interface, and therefore a port group. Container networks do not go through the VCH, so they are not subject to this limitation. This limitation will be removed in a future release.
- `vic-machine` and VCH do not support creation of resources within inventory folders. #3619
  This capability will be added in a future release.
- Image store is in the wrong directory if the datastore already has a directory with the same name. #3365
  If the datastore already has a directory with the same name as the VCH, and that directory does not contain a VM, vic-machine creates the VCH correctly but gives its folder a slightly different name, for example a folder named "test_1" for a VCH named "test". The kvstore is correctly located in the "test_1" folder, but image files are still placed in the "test" directory.
- Deployment with a static IP takes a long time. #3436
  If you deploy a VCH with a static IP address, the deployment might take longer than expected, resulting in timeouts.
  Workaround: Increase the timeout for the deployment when using a static IP address, as in the sketch below.
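
  A minimal sketch of the workaround. It assumes that your vic-machine build accepts a `--timeout` option and that static addressing is configured with the `--public-network-ip`, `--public-network-gateway`, and `--dns-server` options; all addresses, credentials, and object names are placeholders:

  ```bash
  # Deploy with a longer timeout when assigning a static IP.
  # The --timeout flag and the static IP option names are assumptions here,
  # as are all addresses, credentials, and object names.
  vic-machine create \
    --target vcenter.example.com \
    --user 'administrator@vsphere.local' \
    --name vch1 \
    --bridge-network vch1-bridge \
    --image-store datastore1 \
    --public-network-ip 192.168.1.10/24 \
    --public-network-gateway 192.168.1.1 \
    --dns-server 192.168.1.2 \
    --no-tlsverify \
    --timeout 10m0s
  ```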
- Firewall status delayed on vCenter Server. #3139
  If you update the firewall rules on an ESXi host to allow access from specific IP addresses, and that host is managed by vCenter Server, there might be a delay before vCenter Server takes the updated firewall rule into account. In this case, vCenter Server continues to use the old configuration for an indeterminate amount of time after you have made the update. As a result, `vic-machine create` can successfully deploy a VCH with an address that you have blocked, or fail when you deploy a VCH with an address that you have permitted.
  Workaround: Wait a few minutes and run `vic-machine create` again.
- Piping information into `busybox` fails. #3017
  If you attempt to pipe information into `busybox`, for example by running `echo test | docker run -i busybox cat`, the operation fails with the following error:
  `Error response from daemon: Server error from portlayer: ContainerWaitHandler(container_id) Error: context deadline exceeded`
- vic-machine delete does not recognize virtual container hosts that were not fully created. #2981
  `vic-machine delete` fails when you run it on a virtual container host that was not fully created.
  Workaround: Manually delete any components of a partial installation, for example, the virtual container host vApp, the endpoint VM, and datastore folders.
- Container fails to shut down with `Error response from daemon: server error from portlayer : [DELETE /containers/{id}][500] containerRemoveInternalServerError`. #1823
  Workaround: Developers: run `docker create` again. Administrators: unregister and re-register the VM in the vSphere UI.
- Mounting directories as a data volume using the `-v` option is not supported. #2303
- When you pull a large image from Harbor into a virtual container host, you get an error that the /tmp partition has reached capacity. #3624
  `docker: Failed to fetch image blob: weblogic/test_domain/sha256:3bf21a5a3fdf6586732efc8c64581ae1b4c75e342b210c1b6f799a64bffd7924 returned download failed: write /tmp/3bf21a5a3fdf346188145: no space left on device.`
  Workaround: Deploy the virtual container host with `--appliance-memory=4096`, which increases the appliance memory configuration.
- Installing the virtual container host using a short hostname fails. #2582
  Workaround:
  - The IP address that you provide to `vic-machine create --target` must be reachable on the management network.
  - If you use a DNS name instead of an IP address, the virtual container host endpoint VM must be able to resolve the name using the DNS server that is configured either by DHCP or by the `vic-machine create --dns-server` option. There is no default search domain, so use the FQDN.
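
  A minimal sketch that follows both points above, using an FQDN for the target and an explicit DNS server; all names, addresses, and credentials are placeholders:

  ```bash
  # Use the fully qualified domain name of the target rather than a short
  # hostname, and point the endpoint VM at a DNS server that can resolve it.
  # All values are illustrative only.
  vic-machine create \
    --target vcenter.example.com \
    --user 'administrator@vsphere.local' \
    --name vch1 \
    --bridge-network vch1-bridge \
    --image-store datastore1 \
    --dns-server 192.168.1.2 \
    --no-tlsverify
  ```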
- Pulling all tagged images in a repository is not supported. #2724
  vSphere Integrated Containers only attempts to pull the latest tagged image.
- vSphere Integrated Containers fails to delete the vApp that remains after a virtual container host creation fails. #2853
- Container VM fails to start on VIC backed by a VVOL datastore. #2242
  VVOL datastores are not supported in this release.
- Attaching to the same container from multiple terminals causes problems. #2214
- `--net=none` is not supported. #2108
- VCH restarts if a required process cannot be restarted. #2099
  The system should attempt to restart the process a finite number of times, then report an error, leaving the VCH up and running so that logs can be downloaded. Instead, the VCH immediately reboots.
- When some of the hosts in the cluster are not attached to the dVS and do not have access to the bridge network, the error message is not easily readable. #1647
- Image manifest validation for pulled images is not supported. #1331
- Setting up overlay networks is not supported. #1222
  `Error response from daemon: scope type not supported`
- vic-machine can connect to the target but the VCH appliance cannot. #3479
  The VCH cannot get an IP address on the management network or does not have a route to the specified target.
- Adding folder options to vic-machine is not yet implemented. #773
- Adding mapped vSphere networks to running containers is not yet implemented. #745
- Adding bridge networks to running containers is not yet implemented. #743
- Mapping an existing vSphere-level network into the Docker network, to explicitly provide a container with a route that does not go through the VCH appliance, is not yet implemented. #441
- Incorrect image digest format sent to Docker client. #1484
  `docker images --digests` is not supported.
  Workaround: Pull images by tag instead.
- `docker pull` results in an "already exists" error. #1409
  If a context deadline exceeded error occurs on the port layer while performing an image pull, it causes an inconsistent state for the image. Pulls can also take a very long time over a slow network connection.
- `vic-machine create` validation fails if a dvSwitch exists on an ESXi target. #729
Download Binaries
- Official VMware vSphere Integrated Containers 1.1.1 release: http://www.vmware.com/go/download-vic
- Open-source vSphere Integrated Containers Engine project: https://storage.googleapis.com/vic-engine-releases/vic_1.1.1.tar.gz
Installation
For instructions about how to deploy a vSphere Integrated Containers Engine virtual container host, see Using vic-machine to Deploy Virtual Container Hosts in vSphere Integrated Containers for vSphere Administrators.
Using vSphere Integrated Containers Engine
For more details on using vSphere Integrated Containers Engine, see the end user documentation at https://vmware.github.io/vic-product/index.html#getting-started.
Open Source Components
The copyright statements and licenses applicable to the open source software components distributed in vSphere Integrated Containers Engine are available in the LICENSE file.