(1) how can we build such a model and export it as a PMML file?
You have pictured a (6, 3, 6) NN. It is architecturally identical to what is implemented by Scikit-Learn's MLPRegressor class.
You can emulate an autoencoder using MLPRegressor; in the current case, you would define a NN with a single hidden layer (containing three neurons), and train it with X == y:
from sklearn.neural_network import MLPRegressor

autoencoder = MLPRegressor(hidden_layer_sizes=(3,))
autoencoder.fit(X, X)
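A minimal end-to-end sketch of this setup, using synthetic data (the data shape and hyperparameters below are illustrative assumptions, not part of the original answer):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))  # six input features, matching the pictured (6, 3, 6) NN

# Single hidden layer with three neurons; training with X == y makes it an autoencoder
autoencoder = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=42)
autoencoder.fit(X, X)

reconstruction = autoencoder.predict(X)
print(reconstruction.shape)  # (200, 6): six reconstructed features per sample
```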
(2) is PMML capable of encoding such a model structure?
PMML is capable of representing full-blown MLPRegressor objects using the NeuralNetwork model element. Therefore, it is also capable of representing their "truncated" variants, such as autoencoders.
The idea is simply to ignore the last (i.e. rightmost) layer during conversion. Effectively, the pictured (6, 3, 6) NN is truncated to a (6, 3) NN.
The SkLearn2PMML package provides the sklearn2pmml.neural_network.MLPTransformer transformer for this purpose.
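A hedged sketch of the conversion step. The MLPTransformer class is named above; the exact constructor arguments and pipeline wiring shown here are assumptions, so consult the SkLearn2PMML documentation before relying on them:

```python
from sklearn.neural_network import MLPRegressor
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline
from sklearn2pmml.neural_network import MLPTransformer

# Assumes X is your (n_samples, 6) feature matrix
autoencoder = MLPRegressor(hidden_layer_sizes=(3,))
autoencoder.fit(X, X)

# Wrap the fitted NN; on conversion, the output layer is dropped,
# leaving the truncated (6, 3) NN (assumed usage of MLPTransformer)
pipeline = PMMLPipeline([
    ("autoencoder", MLPTransformer(autoencoder))
])

sklearn2pmml(pipeline, "Autoencoder.pmml")
```

Note that the sklearn2pmml conversion function shells out to a Java backend, so a working Java installation is required.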
(3) what are the necessary components in PMML to generate N output nodes in this model?
There is no need to generate anything extra.
The truncated (6, 3) NN provides three outputs y(0), y(1) and y(2), which you may then pass forward to other transformers or models.
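To see that the truncated NN really does yield three outputs, the hidden-layer activations can be computed directly from the fitted MLPRegressor's weights. This is a sketch with synthetic data; it assumes the default "relu" activation, which is applied explicitly below:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))  # illustrative (n_samples, 6) data

autoencoder = MLPRegressor(hidden_layer_sizes=(3,), activation="relu",
                           max_iter=2000, random_state=0)
autoencoder.fit(X, X)

# The hidden-layer activations are exactly the truncated (6, 3) NN's
# outputs y(0), y(1) and y(2): affine map followed by relu
hidden = np.maximum(X @ autoencoder.coefs_[0] + autoencoder.intercepts_[0], 0)
print(hidden.shape)  # (100, 3): three outputs per sample
```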